---
I0223 13:08:15.356115 1 main.go:213] Received signal terminated. Forwarding to sub-process "hyperkube".
I0223 13:08:15.356477 9 genericapiserver.go:554] "[graceful-termination] shutdown event" name="ShutdownInitiated"
I0223 13:08:15.356526 9 patch_genericapiserver.go:97] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-master-0", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ShutdownInitiated' Received signal to terminate, becoming unready, but keeping serving
I0223 13:08:15.356768 9 controller.go:128] Shutting down kubernetes service endpoint reconciler
I0223 13:08:15.361917 9 genericapiserver.go:544] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"
I0223 13:08:15.361978 9 patch_genericapiserver.go:97] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-master-0", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'AfterShutdownDelayDuration' The minimal shutdown duration of 0s finished
W0223 13:08:15.370977 9 lease.go:265] Resetting endpoints for master service "kubernetes" to []
I0223 13:08:15.388940 9 genericapiserver.go:721] "[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped"
I0223 13:08:15.388983 9 patch_genericapiserver.go:97] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-master-0", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationPreShutdownHooksFinished' All pre-shutdown hooks have been finished
I0223 13:08:15.389189 9 genericapiserver.go:633] "[graceful-termination] shutdown event" name="NotAcceptingNewRequest"
I0223 13:08:15.389213 9 patch_genericapiserver.go:97] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-master-0", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'InFlightRequestsDrained' All non long-running request(s) in-flight have drained
I0223 13:08:15.389319 9 genericapiserver.go:679] "[graceful-termination] not going to wait for active watch request(s) to drain"
I0223 13:08:15.393587 9 genericapiserver.go:670] [graceful-termination] in-flight non long-running request(s) have drained
I0223 13:08:15.393632 9 genericapiserver.go:711] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"
I0223 13:08:15.393781 9 controller.go:86] Shutting down OpenAPI V3 AggregationController
I0223 13:08:15.393813 9 controller.go:157] Shutting down quota evaluator
I0223 13:08:15.393825 9 controller.go:176] quota evaluator worker shutdown
I0223 13:08:15.394903 9 genericapiserver.go:735] "[graceful-termination] audit backend shutdown completed"
I0223 13:08:15.394915 9 dynamic_serving_content.go:149] "Shutting down controller" name="aggregator-proxy-cert::/etc/kubernetes/static-pod-certs/secrets/aggregator-client/tls.crt::/etc/kubernetes/static-pod-certs/secrets/aggregator-client/tls.key"
I0223 13:08:15.394953 9 apiaccess_count_controller.go:95] Shutting down APIRequestCount controller.
I0223 13:08:15.394977 9 dynamic_cafile_content.go:175] "Shutting down controller" name="request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt"
I0223 13:08:15.394993 9 dynamic_cafile_content.go:175] "Shutting down controller" name="request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt"
I0223 13:08:15.395009 9 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt"
I0223 13:08:15.395094 9 controller.go:170] Shutting down OpenAPI controller
I0223 13:08:15.395118 9 secure_serving.go:258] Stopped listening on [::]:6443
I0223 13:08:15.395139 9 genericapiserver.go:624] "[graceful-termination] shutdown event" name="HTTPServerStoppedListening"
I0223 13:08:15.395158 9 patch_genericapiserver.go:97] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-master-0", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'HTTPServerStoppedListening' HTTP Server has stopped listening
I0223 13:08:15.395358 9 clusterquotamapping.go:142] Shutting down ClusterQuotaMappingController controller
I0223 13:08:15.395408 9 controller.go:120] Shutting down OpenAPI V3 controller
I0223 13:08:15.395425 9 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
I0223 13:08:15.395916 9 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
I0223 13:08:15.396487 9 dynamic_serving_content.go:149] "Shutting down controller" name="sni-serving-cert::/etc/kubernetes/static-pod-certs/secrets/service-network-serving-certkey/tls.crt::/etc/kubernetes/static-pod-certs/secrets/service-network-serving-certkey/tls.key"
I0223 13:08:15.396954 9 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt"
I0223 13:08:15.396989 9 dynamic_serving_content.go:149] "Shutting down controller" name="sni-serving-cert::/etc/kubernetes/static-pod-certs/secrets/localhost-serving-cert-certkey/tls.crt::/etc/kubernetes/static-pod-certs/secrets/localhost-serving-cert-certkey/tls.key"
I0223 13:08:15.397007 9 dynamic_serving_content.go:149] "Shutting down controller" name="serving-cert::/etc/kubernetes/static-pod-certs/secrets/service-network-serving-certkey/tls.crt::/etc/kubernetes/static-pod-certs/secrets/service-network-serving-certkey/tls.key"
I0223 13:08:15.395429 9 customresource_discovery_controller.go:328] Shutting down DiscoveryController
I0223 13:08:15.395443 9 autoregister_controller.go:168] Shutting down autoregister controller
I0223 13:08:15.395459 9 crdregistration_controller.go:146] Shutting down crd-autoregister controller
I0223 13:08:15.395471 9 apiapproval_controller.go:201] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
I0223 13:08:15.395482 9 nonstructuralschema_controller.go:207] Shutting down NonStructuralSchemaConditionController
I0223 13:08:15.395492 9 establishing_controller.go:92] Shutting down EstablishingController
I0223 13:08:15.395502 9 naming_controller.go:305] Shutting down NamingConditionController
I0223 13:08:15.395550 9 cluster_authentication_trust_controller.go:466] Shutting down cluster_authentication_trust_controller controller
I0223 13:08:15.395599 9 storage_flowcontrol.go:186] APF bootstrap ensurer is exiting
I0223 13:08:15.395831 9 controller.go:176] quota evaluator worker shutdown
I0223 13:08:15.395895 9 controller.go:84] Shutting down OpenAPI AggregationController
I0223 13:08:15.395952 9 controller.go:176] quota evaluator worker shutdown
I0223 13:08:15.395958 9 controller.go:176] quota evaluator worker shutdown
I0223 13:08:15.396035 9 crd_finalizer.go:281] Shutting down CRDFinalizer
I0223 13:08:15.396045 9 apiservice_controller.go:134] Shutting down APIServiceRegistrationController
E0223 13:08:15.397408 9 wrap.go:53] "Timeout or abort while handling" logger="UnhandledError" method="GET" URI="/apis/user.openshift.io/v1/groups?allowWatchBookmarks=true&resourceVersion=10759&timeout=7m54s&timeoutSeconds=474&watch=true" auditID="bbe7fdb6-3b00-4947-ab93-8e9039dfe5be"
I0223 13:08:15.396059 9 system_namespaces_controller.go:76] Shutting down system namespaces controller
I0223 13:08:15.396070 9 local_available_controller.go:172] Shutting down LocalAvailability controller
I0223 13:08:15.396082 9 remote_available_controller.go:449] Shutting down RemoteAvailability controller
I0223 13:08:15.396686 9 controller.go:176] quota evaluator worker shutdown
I0223 13:08:15.396905 9 apf_controller.go:389] Shutting down API Priority and Fairness config worker
I0223 13:08:15.397493 9 dynamic_serving_content.go:149] "Shutting down controller" name="sni-serving-cert::/etc/kubernetes/static-pod-certs/secrets/internal-loadbalancer-serving-certkey/tls.crt::/etc/kubernetes/static-pod-certs/secrets/internal-loadbalancer-serving-certkey/tls.key"
I0223 13:08:15.396931 9 gc_controller.go:91] Shutting down apiserver lease garbage collector
I0223 13:08:15.396936 9 controller.go:132] Ending legacy_token_tracking_controller
I0223 13:08:15.397568 9 controller.go:133] Shutting down legacy_token_tracking_controller
I0223 13:08:15.398221 9 dynamic_serving_content.go:149] "Shutting down controller" name="sni-serving-cert::/etc/kubernetes/static-pod-resources/secrets/localhost-recovery-serving-certkey/tls.crt::/etc/kubernetes/static-pod-resources/secrets/localhost-recovery-serving-certkey/tls.key"
I0223 13:08:15.398753 9 dynamic_serving_content.go:149] "Shutting down controller" name="sni-serving-cert::/etc/kubernetes/static-pod-certs/secrets/external-loadbalancer-serving-certkey/tls.crt::/etc/kubernetes/static-pod-certs/secrets/external-loadbalancer-serving-certkey/tls.key"
I0223 13:08:17.396076 9 genericapiserver.go:742] [graceful-termination] apiserver is exiting
I0223 13:08:17.396149 9 patch_genericapiserver.go:97] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-master-0", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationGracefulTerminationFinished' All pending requests processed
W0223 13:08:17.401414 9 cacher.go:171] Terminating all watchers from cacher apiservices.apiregistration.k8s.io
W0223 13:08:17.402020 9 cacher.go:171] Terminating all watchers from cacher podtemplates
W0223 13:08:17.402819 9 cacher.go:171] Terminating all watchers from cacher serviceaccounts
W0223 13:08:17.403871 9 cacher.go:171] Terminating all watchers from cacher persistentvolumes
W0223 13:08:17.404760 9 cacher.go:171] Terminating all watchers from cacher persistentvolumeclaims
W0223 13:08:17.406134 9 cacher.go:171] Terminating all watchers from cacher configmaps
W0223 13:08:17.417365 9 cacher.go:171] Terminating all watchers from cacher replicationcontrollers
W0223 13:08:17.418539 9 cacher.go:171] Terminating all watchers from cacher nodes
W0223 13:08:17.424772 9 cacher.go:171] Terminating all watchers from cacher resourcequotas
W0223 13:08:17.425411 9 cacher.go:171] Terminating all watchers from cacher endpoints
W0223 13:08:17.425995 9 cacher.go:171] Terminating all watchers from cacher namespaces
W0223 13:08:17.427156 9 cacher.go:171] Terminating all watchers from cacher secrets
W0223 13:08:17.428545 9 cacher.go:171] Terminating all watchers from cacher pods
W0223 13:08:17.429600 9 cacher.go:171] Terminating all watchers from cacher services
W0223 13:08:17.431673 9 cacher.go:171] Terminating all watchers from cacher limitranges
W0223 13:08:17.433351 9 cacher.go:171] Terminating all watchers from cacher horizontalpodautoscalers.autoscaling
W0223 13:08:17.435224 9 cacher.go:171] Terminating all watchers from cacher jobs.batch
W0223 13:08:17.435747 9 cacher.go:171] Terminating all watchers from cacher cronjobs.batch
W0223 13:08:17.436333 9 cacher.go:171] Terminating all watchers from cacher certificatesigningrequests.certificates.k8s.io
W0223 13:08:17.436971 9 cacher.go:171] Terminating all watchers from cacher leases.coordination.k8s.io
W0223 13:08:17.438346 9 cacher.go:171] Terminating all watchers from cacher endpointslices.discovery.k8s.io
W0223 13:08:17.439178 9 cacher.go:171] Terminating all watchers from cacher networkpolicies.networking.k8s.io
W0223 13:08:17.439643 9 cacher.go:171] Terminating all watchers from cacher ingresses.networking.k8s.io
W0223 13:08:17.439991 9 cacher.go:171] Terminating all watchers from cacher ingressclasses.networking.k8s.io
W0223 13:08:17.440380 9 cacher.go:171] Terminating all watchers from cacher runtimeclasses.node.k8s.io
W0223 13:08:17.440672 9 cacher.go:171] Terminating all watchers from cacher poddisruptionbudgets.policy
W0223 13:08:17.441185 9 cacher.go:171] Terminating all watchers from cacher clusterroles.rbac.authorization.k8s.io
W0223 13:08:17.442042 9 cacher.go:171] Terminating all watchers from cacher clusterrolebindings.rbac.authorization.k8s.io
W0223 13:08:17.443100 9 cacher.go:171] Terminating all watchers from cacher roles.rbac.authorization.k8s.io
W0223 13:08:17.444992 9 cacher.go:171] Terminating all watchers from cacher rolebindings.rbac.authorization.k8s.io
W0223 13:08:17.447375 9 cacher.go:171] Terminating all watchers from cacher csistoragecapacities.storage.k8s.io
W0223 13:08:17.449628 9 cacher.go:171] Terminating all watchers from cacher storageclasses.storage.k8s.io
W0223 13:08:17.450435 9 cacher.go:171] Terminating all watchers from cacher volumeattachments.storage.k8s.io
W0223 13:08:17.450985 9 cacher.go:171] Terminating all watchers from cacher csinodes.storage.k8s.io
W0223 13:08:17.451413 9 cacher.go:171] Terminating all watchers from cacher csidrivers.storage.k8s.io
W0223 13:08:17.453055 9 cacher.go:171] Terminating all watchers from cacher deployments.apps
W0223 13:08:17.454834 9 cacher.go:171] Terminating all watchers from cacher statefulsets.apps
W0223 13:08:17.455507 9 cacher.go:171] Terminating all watchers from cacher daemonsets.apps
W0223 13:08:17.456372 9 cacher.go:171] Terminating all watchers from cacher replicasets.apps
W0223 13:08:17.457136 9 cacher.go:171] Terminating all watchers from cacher controllerrevisions.apps
W0223 13:08:17.457799 9 cacher.go:171] Terminating all watchers from cacher validatingwebhookconfigurations.admissionregistration.k8s.io
W0223 13:08:17.459613 9 cacher.go:171] Terminating all watchers from cacher mutatingwebhookconfigurations.admissionregistration.k8s.io
W0223 13:08:17.460504 9 cacher.go:171] Terminating all watchers from cacher validatingadmissionpolicies.admissionregistration.k8s.io
W0223 13:08:17.460925 9 cacher.go:171] Terminating all watchers from cacher validatingadmissionpolicybindings.admissionregistration.k8s.io
W0223 13:08:17.462560 9 cacher.go:171] Terminating all watchers from cacher customresourcedefinitions.apiextensions.k8s.io
W0223 13:08:17.463422 9 cacher.go:171] Terminating all watchers from cacher storages.operator.openshift.io
W0223 13:08:17.463942 9 cacher.go:171] Terminating all watchers from cacher adminnetworkpolicies.policy.networking.k8s.io
W0223 13:08:17.464459 9 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
W0223 13:08:17.465493 9 cacher.go:171] Terminating all watchers from cacher images.config.openshift.io
W0223 13:08:17.466867 9 cacher.go:171] Terminating all watchers from cacher operatorconditions.operators.coreos.com
W0223 13:08:17.467588 9 cacher.go:171] Terminating all watchers from cacher ingresscontrollers.operator.openshift.io
W0223 13:08:17.468341 9 cacher.go:171] Terminating all watchers from cacher imagecontentsourcepolicies.operator.openshift.io
W0223 13:08:17.468807 9 cacher.go:171] Terminating all watchers from cacher openshiftcontrollermanagers.operator.openshift.io
W0223 13:08:17.469215 9 cacher.go:171] Terminating all watchers from cacher clusteruserdefinednetworks.k8s.ovn.org
W0223 13:08:17.470141 9 cacher.go:171] Terminating all watchers from cacher operatorgroups.operators.coreos.com
W0223 13:08:17.470675 9 cacher.go:171] Terminating all watchers from cacher egressfirewalls.k8s.ovn.org
W0223 13:08:17.471295 9 cacher.go:171] Terminating all watchers from cacher configs.samples.operator.openshift.io
W0223 13:08:17.471999 9 cacher.go:171] Terminating all watchers from cacher consoles.config.openshift.io
W0223 13:08:17.473155 9 cacher.go:171] Terminating all watchers from cacher hostfirmwarecomponents.metal3.io
W0223 13:08:17.474008 9 cacher.go:171] Terminating all watchers from cacher ipaddressclaims.ipam.cluster.x-k8s.io
W0223 13:08:17.474500 9 cacher.go:171] Terminating all watchers from cacher kubeletconfigs.machineconfiguration.openshift.io
W0223 13:08:17.476432 9 cacher.go:171] Terminating all watchers from cacher ipaddresses.ipam.cluster.x-k8s.io
W0223 13:08:17.476888 9 cacher.go:171] Terminating all watchers from cacher rolebindingrestrictions.authorization.openshift.io
W0223 13:08:17.477223 9 cacher.go:171] Terminating all watchers from cacher nodeslicepools.whereabouts.cni.cncf.io
W0223 13:08:17.477820 9 cacher.go:171] Terminating all watchers from cacher egressservices.k8s.ovn.org
W0223 13:08:17.478279 9 cacher.go:171] Terminating all watchers from cacher machineconfigurations.operator.openshift.io
W0223 13:08:17.478735 9 cacher.go:171] Terminating all watchers from cacher insightsoperators.operator.openshift.io
W0223 13:08:17.479180 9 cacher.go:171] Terminating all watchers from cacher metal3remediationtemplates.infrastructure.cluster.x-k8s.io
W0223 13:08:17.480004 9 cacher.go:171] Terminating all watchers from cacher schedulers.config.openshift.io
W0223 13:08:17.480727 9 cacher.go:171] Terminating all watchers from cacher firmwareschemas.metal3.io
W0223 13:08:17.481327 9 cacher.go:171] Terminating all watchers from cacher imagepruners.imageregistry.operator.openshift.io
W0223 13:08:17.481671 9 cacher.go:171] Terminating all watchers from cacher machinehealthchecks.machine.openshift.io
W0223 13:08:17.482921 9 cacher.go:171] Terminating all watchers from cacher performanceprofiles.performance.openshift.io
W0223 13:08:17.484061 9 cacher.go:171] Terminating all watchers from cacher securitycontextconstraints.security.openshift.io
W0223 13:08:17.486671 9 cacher.go:171] Terminating all watchers from cacher ipamclaims.k8s.cni.cncf.io
W0223 13:08:17.487645 9 cacher.go:171] Terminating all watchers from cacher egressqoses.k8s.ovn.org
W0223 13:08:17.488793 9 cacher.go:171] Terminating all watchers from cacher egressrouters.network.operator.openshift.io
W0223 13:08:17.489749 9 cacher.go:171] Terminating all watchers from cacher clusterautoscalers.autoscaling.openshift.io
W0223 13:08:17.490587 9 cacher.go:171] Terminating all watchers from cacher alertmanagerconfigs.monitoring.coreos.com
W0223 13:08:17.491116 9 cacher.go:171] Terminating all watchers from cacher alertmanagerconfigs.monitoring.coreos.com
W0223 13:08:17.491588 9 cacher.go:171] Terminating all watchers from cacher tuneds.tuned.openshift.io
W0223 13:08:17.492860 9 cacher.go:171] Terminating all watchers from cacher configs.imageregistry.operator.openshift.io
W0223 13:08:17.493471 9 cacher.go:171] Terminating all watchers from cacher machineconfigpools.machineconfiguration.openshift.io
W0223 13:08:17.494428 9 cacher.go:171] Terminating all watchers from cacher adminpolicybasedexternalroutes.k8s.ovn.org
W0223 13:08:17.494854 9 cacher.go:171] Terminating all watchers from cacher dnsrecords.ingress.operator.openshift.io
W0223 13:08:17.495715 9 cacher.go:171] Terminating all watchers from cacher apiservers.config.openshift.io
W0223 13:08:17.496371 9 cacher.go:171] Terminating all watchers from cacher operatorpkis.network.operator.openshift.io
W0223 13:08:17.496780 9 cacher.go:171] Terminating all watchers from cacher provisionings.metal3.io
W0223 13:08:17.497365 9 cacher.go:171] Terminating all watchers from cacher dnses.config.openshift.io
W0223 13:08:17.497935 9 cacher.go:171] Terminating all watchers from cacher kubecontrollermanagers.operator.openshift.io
W0223 13:08:17.498613 9 cacher.go:171] Terminating all watchers from cacher infrastructures.config.openshift.io
W0223 13:08:17.499527 9 cacher.go:171] Terminating all watchers from cacher containerruntimeconfigs.machineconfiguration.openshift.io
W0223 13:08:17.500625 9 cacher.go:171] Terminating all watchers from cacher overlappingrangeipreservations.whereabouts.cni.cncf.io
W0223 13:08:17.501217 9 cacher.go:171] Terminating all watchers from cacher metal3remediations.infrastructure.cluster.x-k8s.io
W0223 13:08:17.502012 9 cacher.go:171] Terminating all watchers from cacher alertmanagers.monitoring.coreos.com
W0223 13:08:17.502538 9 cacher.go:171] Terminating all watchers from cacher operatorhubs.config.openshift.io
W0223 13:08:17.503072 9 cacher.go:171] Terminating all watchers from cacher clusteroperators.config.openshift.io
W0223 13:08:17.506386 9 cacher.go:171] Terminating all watchers from cacher kubestorageversionmigrators.operator.openshift.io
W0223 13:08:17.508015 9 cacher.go:171] Terminating all watchers from cacher olmconfigs.operators.coreos.com
W0223 13:08:17.509535 9 cacher.go:171] Terminating all watchers from cacher kubeapiservers.operator.openshift.io
W0223 13:08:17.510390 9 cacher.go:171] Terminating all watchers from cacher baselineadminnetworkpolicies.policy.networking.k8s.io
W0223 13:08:17.511079 9 cacher.go:171] Terminating all watchers from cacher dnses.operator.openshift.io
W0223 13:08:17.511550 9 cacher.go:171] Terminating all watchers from cacher credentialsrequests.cloudcredential.openshift.io
W0223 13:08:17.512165 9 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
W0223 13:08:17.513153 9 cacher.go:171] Terminating all watchers from cacher machineautoscalers.autoscaling.openshift.io
W0223 13:08:17.513655 9 cacher.go:171] Terminating all watchers from cacher ingresses.config.openshift.io
W0223 13:08:17.514817 9 cacher.go:171] Terminating all watchers from cacher installplans.operators.coreos.com
W0223 13:08:17.516458 9 cacher.go:171] Terminating all watchers from cacher nodes.config.openshift.io
W0223 13:08:17.517389 9 cacher.go:171] Terminating all watchers from cacher proxies.config.openshift.io
W0223 13:08:17.519426 9 cacher.go:171] Terminating all watchers from cacher catalogsources.operators.coreos.com
W0223 13:08:17.520384 9 cacher.go:171] Terminating all watchers from cacher machineconfigs.machineconfiguration.openshift.io
W0223 13:08:17.521472 9 cacher.go:171] Terminating all watchers from cacher controllerconfigs.machineconfiguration.openshift.io
W0223 13:08:17.522058 9 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
W0223 13:08:17.523521 9 cacher.go:171] Terminating all watchers from cacher csisnapshotcontrollers.operator.openshift.io
W0223 13:08:17.524402 9 cacher.go:171] Terminating all watchers from cacher hostfirmwaresettings.metal3.io
W0223 13:08:17.524854 9 cacher.go:171] Terminating all watchers from cacher clusterresourcequotas.quota.openshift.io
W0223 13:08:17.525299 9 cacher.go:171] Terminating all watchers from cacher machinesets.machine.openshift.io
W0223 13:08:17.525945 9 cacher.go:171] Terminating all watchers from cacher builds.config.openshift.io
W0223 13:08:17.526815 9 cacher.go:171] Terminating all watchers from cacher networks.config.openshift.io
W0223 13:08:17.528819 9 cacher.go:171] Terminating all watchers from cacher baremetalhosts.metal3.io
W0223 13:08:17.529654 9 cacher.go:171] Terminating all watchers from cacher machines.machine.openshift.io
W0223 13:08:17.531170 9 cacher.go:171] Terminating all watchers from cacher clusterserviceversions.operators.coreos.com
W0223 13:08:17.531682 9 cacher.go:171] Terminating all watchers from cacher projecthelmchartrepositories.helm.openshift.io
W0223 13:08:17.533489 9 cacher.go:171] Terminating all watchers from cacher oauths.config.openshift.io
W0223 13:08:17.534360 9 cacher.go:171] Terminating all watchers from cacher subscriptions.operators.coreos.com
W0223 13:08:17.534717 9 cacher.go:171] Terminating all watchers from cacher egressips.k8s.ovn.org
W0223 13:08:17.535197 9 cacher.go:171] Terminating all watchers from cacher preprovisioningimages.metal3.io
W0223 13:08:17.536047 9 cacher.go:171] Terminating all watchers from cacher servicemonitors.monitoring.coreos.com
W0223 13:08:17.536503 9 cacher.go:171] Terminating all watchers from cacher dataimages.metal3.io
W0223 13:08:17.537049 9 cacher.go:171] Terminating all watchers from cacher podmonitors.monitoring.coreos.com
W0223 13:08:17.537996 9 cacher.go:171] Terminating all watchers from cacher servicecas.operator.openshift.io
W0223 13:08:17.538833 9 cacher.go:171] Terminating all watchers from cacher alertingrules.monitoring.openshift.io
W0223 13:08:17.539458 9 cacher.go:171] Terminating all watchers from cacher userdefinednetworks.k8s.ovn.org
W0223 13:08:17.539923 9 cacher.go:171] Terminating all watchers from cacher probes.monitoring.coreos.com
W0223 13:08:17.540367 9 cacher.go:171] Terminating all watchers from cacher storageversionmigrations.migration.k8s.io
W0223 13:08:17.541367 9 cacher.go:171] Terminating all watchers from cacher clustercatalogs.olm.operatorframework.io
W0223 13:08:17.541734 9 cacher.go:171] Terminating all watchers from cacher authentications.config.openshift.io
W0223 13:08:17.542152 9 cacher.go:171] Terminating all watchers from cacher thanosrulers.monitoring.coreos.com
W0223 13:08:17.542645 9 cacher.go:171] Terminating all watchers from cacher clusterversions.config.openshift.io
W0223 13:08:17.544165 9 cacher.go:171] Terminating all watchers from cacher clustercsidrivers.operator.openshift.io
W0223 13:08:17.544738 9 cacher.go:171] Terminating all watchers from cacher featuregates.config.openshift.io
W0223 13:08:17.545619 9 cacher.go:171] Terminating all watchers from cacher profiles.tuned.openshift.io
W0223 13:08:17.546269 9 cacher.go:171] Terminating all watchers from cacher operators.operators.coreos.com
W0223 13:08:17.547182 9 cacher.go:171] Terminating all watchers from cacher prometheusrules.monitoring.coreos.com
W0223 13:08:17.548233 9 cacher.go:171] Terminating all watchers from cacher imagedigestmirrorsets.config.openshift.io
W0223 13:08:17.548777 9 cacher.go:171] Terminating all watchers from cacher network-attachment-definitions.k8s.cni.cncf.io
W0223 13:08:17.549833 9 cacher.go:171] Terminating all watchers from cacher controlplanemachinesets.machine.openshift.io
W0223 13:08:17.550670 9 cacher.go:171] Terminating all watchers from cacher alertrelabelconfigs.monitoring.openshift.io
W0223 13:08:17.551324 9 cacher.go:171] Terminating all watchers from cacher prometheuses.monitoring.coreos.com
W0223 13:08:17.551683 9 cacher.go:171] Terminating all watchers from cacher imagetagmirrorsets.config.openshift.io
W0223 13:08:17.552060 9 cacher.go:171] Terminating all watchers from cacher machineosconfigs.machineconfiguration.openshift.io
I0223 13:08:17.586446 1 main.go:235] Termination finished with exit code 0
I0223 13:08:17.586827 1 main.go:188] Deleting termination lock file "/var/log/kube-apiserver/.terminating"
---
I0223 13:16:06.830216 1 main.go:213] Received signal terminated. Forwarding to sub-process "hyperkube".
I0223 13:16:06.830515 14 genericapiserver.go:554] "[graceful-termination] shutdown event" name="ShutdownInitiated"
I0223 13:16:06.830539 14 controller.go:128] Shutting down kubernetes service endpoint reconciler
I0223 13:16:06.830584 14 patch_genericapiserver.go:97] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-master-0", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ShutdownInitiated' Received signal to terminate, becoming unready, but keeping serving
I0223 13:16:07.134040 14 genericapiserver.go:544] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"
I0223 13:16:07.134099 14 patch_genericapiserver.go:97] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-master-0", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'AfterShutdownDelayDuration' The minimal shutdown duration of 0s finished
I0223 13:16:07.400455 14 trace.go:236] Trace[1999689250]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:b8d385fc-442a-4161-8381-5bfbc810daec,client:10.128.0.8,api-group:,api-version:v1,name:v4-0-config-system-session,subresource:,namespace:openshift-authentication,protocol:HTTP/2.0,resource:secrets,scope:resource,url:/api/v1/namespaces/openshift-authentication/secrets/v4-0-config-system-session,user-agent:authentication-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (23-Feb-2026 13:16:06.874) (total time: 526ms):
Trace[1999689250]: ---"About to write a response" 526ms (13:16:07.400)
Trace[1999689250]: [526.378957ms] [526.378957ms] END
I0223 13:16:07.400797 14 trace.go:236] Trace[1769925683]: "Get" accept:application/json, */*,audit-id:97bc8ece-c00a-46ca-8cae-b2d501090a1a,client:10.128.0.25,api-group:config.openshift.io,api-version:v1,name:cluster,subresource:,namespace:,protocol:HTTP/2.0,resource:dnses,scope:resource,url:/apis/config.openshift.io/v1/dnses/cluster,user-agent:ingress-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (23-Feb-2026 13:15:45.256) (total time: 22144ms):
Trace[1769925683]: ---"About to write a response" 22144ms (13:16:07.400)
Trace[1769925683]: [22.144517685s] [22.144517685s] END
I0223 13:16:07.401537 14 trace.go:236] Trace[1682193706]: "List" accept:application/vnd.kubernetes.protobuf, */*,audit-id:7a33c3bd-7091-4768-9f91-1e8157626422,client:::1,api-group:,api-version:v1,name:,subresource:,namespace:openshift-kube-apiserver,protocol:HTTP/2.0,resource:limitranges,scope:namespace,url:/api/v1/namespaces/openshift-kube-apiserver/limitranges,user-agent:kube-apiserver/v1.31.14 (linux/amd64) kubernetes/8311c4d,verb:LIST (23-Feb-2026 13:16:06.861) (total time: 540ms):
Trace[1682193706]: ["List(recursive=false) etcd3" audit-id:7a33c3bd-7091-4768-9f91-1e8157626422,key:/limitranges,resourceVersion:,resourceVersionMatch:,limit:1,continue: 540ms (13:16:06.861)]
Trace[1682193706]: [540.184103ms] [540.184103ms] END
W0223 13:16:07.402031 14 lease.go:265] Resetting endpoints for master service "kubernetes" to []
E0223 13:16:07.561060 14 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"context canceled\"}: context canceled" logger="UnhandledError"
E0223 13:16:07.562441 14 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
E0223 13:16:07.563620 14 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
E0223 13:16:07.566694 14 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
I0223 13:16:07.567869 14 trace.go:236] Trace[969083350]: "Get" accept:application/json, */*,audit-id:871d12ce-4f2f-483a-a76f-48be08c49827,client:10.128.0.77,api-group:coordination.k8s.io,api-version:v1,name:console-operator-lock,subresource:,namespace:openshift-console-operator,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/openshift-console-operator/leases/console-operator-lock,user-agent:console/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (23-Feb-2026 13:15:38.423) (total time: 29144ms):
Trace[969083350]: [29.144741094s] [29.144741094s] END
E0223 13:16:07.568158 14 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="6.974895ms" method="GET" path="/apis/coordination.k8s.io/v1/namespaces/openshift-console-operator/leases/console-operator-lock" result=null
I0223 13:16:07.729758 14 trace.go:236] Trace[125994334]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:5bc74cf1-acc8-4601-9d01-505cd48e16ad,client:192.168.32.10,api-group:,api-version:v1,name:,subresource:,namespace:openshift-kube-apiserver,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/openshift-kube-apiserver/pods,user-agent:kubelet/v1.31.14 (linux/amd64) kubernetes/8311c4d,verb:POST (23-Feb-2026 13:16:06.859) (total time: 869ms):
Trace[125994334]: [869.877136ms] [869.877136ms] END
I0223 13:16:07.893874 14 trace.go:236] Trace[667155646]: "Update" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:31a863e8-dfbd-48e1-9ee1-a8ed0489b854,client:::1,api-group:coordination.k8s.io,api-version:v1,name:cert-regeneration-controller-lock,subresource:,namespace:openshift-kube-apiserver,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/openshift-kube-apiserver/leases/cert-regeneration-controller-lock,user-agent:cluster-kube-apiserver-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (23-Feb-2026 13:15:33.893) (total time: 34000ms):
Trace[667155646]: ["GuaranteedUpdate etcd3" audit-id:31a863e8-dfbd-48e1-9ee1-a8ed0489b854,key:/leases/openshift-kube-apiserver/cert-regeneration-controller-lock,type:*coordination.Lease,resource:leases.coordination.k8s.io 34000ms (13:15:33.893)
Trace[667155646]: ---"Txn call failed" err:context deadline exceeded 33998ms (13:16:07.893)]
Trace[667155646]: [34.000314555s] [34.000314555s] END
E0223 13:16:07.893897 14 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 6.6µs, panicked: false, err: context deadline exceeded, panic-reason: " logger="UnhandledError"
E0223 13:16:07.965792 14 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 11.761µs, panicked: false, err: context deadline exceeded, panic-reason: " logger="UnhandledError"
I0223 13:16:07.965855 14 trace.go:236] Trace[1045766959]: "Update" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:c39436a5-42dc-4dbd-8bcf-9ce0edd6951b,client:10.128.0.21,api-group:coordination.k8s.io,api-version:v1,name:openshift-cluster-etcd-operator-lock,subresource:,namespace:openshift-etcd-operator,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/openshift-etcd-operator/leases/openshift-cluster-etcd-operator-lock,user-agent:cluster-etcd-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (23-Feb-2026 13:15:33.965) (total time: 34000ms):
Trace[1045766959]: ["GuaranteedUpdate etcd3" audit-id:c39436a5-42dc-4dbd-8bcf-9ce0edd6951b,key:/leases/openshift-etcd-operator/openshift-cluster-etcd-operator-lock,type:*coordination.Lease,resource:leases.coordination.k8s.io 34000ms (13:15:33.965)
Trace[1045766959]: ---"Txn call failed" err:context deadline exceeded 33998ms (13:16:07.965)]
Trace[1045766959]: [34.000245643s] [34.000245643s] END
E0223 13:16:07.972787 14 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"context canceled\"}: context canceled" logger="UnhandledError"
E0223 13:16:07.973061 14 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"context canceled\"}: context canceled" logger="UnhandledError"
E0223 13:16:07.973904 14 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
E0223 13:16:07.974911 14 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
E0223 13:16:07.974948 14 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
E0223 13:16:07.975985 14 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
E0223 13:16:07.976006 14 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
E0223 13:16:07.977109 14 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
E0223 13:16:07.977301 14 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="4.487805ms" method="GET" path="/api/v1/namespaces/openshift-etcd/configmaps/etcd-all-bundles" result=null
E0223 13:16:07.978336 14 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="5.211285ms" method="GET" path="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:etcd-backup-crb" result=null
I0223 13:16:08.089877 14 genericapiserver.go:721] "[graceful-termination] pre-shutdown hooks completed"
name="PreShutdownHooksStopped" I0223 13:16:08.089921 14 patch_genericapiserver.go:97] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-master-0", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationPreShutdownHooksFinished' All pre-shutdown hooks have been finished I0223 13:16:08.089987 14 genericapiserver.go:633] "[graceful-termination] shutdown event" name="NotAcceptingNewRequest" I0223 13:16:08.090027 14 patch_genericapiserver.go:97] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-kube-apiserver", Name:"kube-apiserver-master-0", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'InFlightRequestsDrained' All non long-running request(s) in-flight have drained I0223 13:16:08.090199 14 genericapiserver.go:679] "[graceful-termination] not going to wait for active watch request(s) to drain" E0223 13:16:08.131159 14 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded" logger="UnhandledError" E0223 13:16:08.132529 14 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError" E0223 13:16:08.133685 14 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError" E0223 13:16:08.134824 14 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError" I0223 13:16:08.135984 14 trace.go:236] Trace[873711664]: "Get" accept:application/json, 
*/*,audit-id:8c67063f-4cad-423c-b948-d9e0d981bf2a,client:10.128.0.33,api-group:coordination.k8s.io,api-version:v1,name:9c4404e7.operatorframework.io,subresource:,namespace:openshift-operator-controller,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/openshift-operator-controller/leases/9c4404e7.operatorframework.io,user-agent:manager/v0.0.0 (linux/amd64) kubernetes/$Format/leader-election,verb:GET (23-Feb-2026 13:15:08.131) (total time: 60004ms): Trace[873711664]: [1m0.004556911s] [1m0.004556911s] END E0223 13:16:08.136185 14 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="4.924238ms" method="GET" path="/apis/coordination.k8s.io/v1/namespaces/openshift-operator-controller/leases/9c4404e7.operatorframework.io" result=null I0223 13:16:08.151336 14 trace.go:236] Trace[1652642515]: "Get" accept:application/json, */*,audit-id:fcb1edb5-63eb-48a9-beb8-cc469743ff62,client:10.128.0.25,api-group:config.openshift.io,api-version:v1,name:cluster,subresource:,namespace:,protocol:HTTP/2.0,resource:infrastructures,scope:resource,url:/apis/config.openshift.io/v1/infrastructures/cluster,user-agent:ingress-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (23-Feb-2026 13:16:07.403) (total time: 747ms): Trace[1652642515]: ---"About to write a response" 747ms (13:16:08.151) Trace[1652642515]: [747.444101ms] [747.444101ms] END I0223 13:16:08.151551 14 trace.go:236] Trace[1431436794]: "Get" accept:application/json, */*,audit-id:7a89d6c2-290d-4d82-b12b-aac7f96e7075,client:10.128.0.14,api-group:config.openshift.io,api-version:v1,name:cluster,subresource:,namespace:,protocol:HTTP/2.0,resource:infrastructures,scope:resource,url:/apis/config.openshift.io/v1/infrastructures/cluster,user-agent:operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (23-Feb-2026 13:15:45.246) (total time: 22904ms): Trace[1431436794]: ---"About to write a response" 22904ms (13:16:08.151) Trace[1431436794]: [22.904639879s] 
[22.904639879s] END I0223 13:16:08.151594 14 trace.go:236] Trace[371786264]: "Get" accept:application/json, */*,audit-id:131d918e-e0a3-42fc-904f-0bd4c787b9d5,client:10.128.0.16,api-group:config.openshift.io,api-version:v1,name:cluster,subresource:,namespace:,protocol:HTTP/2.0,resource:infrastructures,scope:resource,url:/apis/config.openshift.io/v1/infrastructures/cluster,user-agent:cluster-image-registry-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (23-Feb-2026 13:16:01.186) (total time: 6965ms): Trace[371786264]: ---"About to write a response" 6965ms (13:16:08.151) Trace[371786264]: [6.965247076s] [6.965247076s] END I0223 13:16:08.151760 14 trace.go:236] Trace[1852537467]: "Get" accept:application/json, */*,audit-id:71aa6c2e-8a97-46c6-9236-ac5e8c8e56e3,client:10.128.0.56,api-group:config.openshift.io,api-version:v1,name:cluster,subresource:,namespace:,protocol:HTTP/2.0,resource:infrastructures,scope:resource,url:/apis/config.openshift.io/v1/infrastructures/cluster,user-agent:cluster-baremetal-operator/v0.0.0 (linux/amd64) kubernetes/$Format/cluster-baremetal-operator,verb:GET (23-Feb-2026 13:15:11.112) (total time: 57038ms): Trace[1852537467]: ---"About to write a response" 57038ms (13:16:08.151) Trace[1852537467]: [57.038983872s] [57.038983872s] END I0223 13:16:08.151982 14 trace.go:236] Trace[72919851]: "Get" accept:application/json, */*,audit-id:f786a096-bf44-4838-a178-fd7c1206aea5,client:10.128.0.19,api-group:config.openshift.io,api-version:v1,name:cluster,subresource:,namespace:,protocol:HTTP/2.0,resource:infrastructures,scope:resource,url:/apis/config.openshift.io/v1/infrastructures/cluster,user-agent:cluster-node-tuning-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (23-Feb-2026 13:16:01.110) (total time: 7041ms): Trace[72919851]: ---"About to write a response" 7041ms (13:16:08.151) Trace[72919851]: [7.041656457s] [7.041656457s] END I0223 13:16:08.203167 14 trace.go:236] Trace[409256446]: "Get" accept:application/json, 
*/*,audit-id:d60f6f5c-0d25-4ddb-ac8c-874f36dfef9e,client:10.128.0.20,api-group:operator.openshift.io,api-version:v1,name:default,subresource:,namespace:,protocol:HTTP/2.0,resource:dnses,scope:resource,url:/apis/operator.openshift.io/v1/dnses/default,user-agent:dns-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (23-Feb-2026 13:15:51.700) (total time: 16502ms): Trace[409256446]: ---"About to write a response" 16502ms (13:16:08.202) Trace[409256446]: [16.502298003s] [16.502298003s] END E0223 13:16:08.413851 14 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"context canceled\"}: context canceled" logger="UnhandledError" E0223 13:16:08.415644 14 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError" E0223 13:16:08.416798 14 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError" E0223 13:16:08.417959 14 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError" I0223 13:16:08.419645 14 trace.go:236] Trace[1552132544]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:ce83d1bd-580f-4f75-8ef8-f77542d64b6c,client:::1,api-group:coordination.k8s.io,api-version:v1,name:cert-recovery-controller-lock,subresource:,namespace:openshift-kube-controller-manager,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cert-recovery-controller-lock,user-agent:cluster-kube-controller-manager-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (23-Feb-2026 13:15:16.423) (total time: 51995ms): Trace[1552132544]: [51.995748983s] [51.995748983s] END E0223 13:16:08.419849 14 timeout.go:140] "Post-timeout 
activity" logger="UnhandledError" timeElapsed="5.934205ms" method="GET" path="/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cert-recovery-controller-lock" result=null I0223 13:16:08.665939 14 trace.go:236] Trace[1870934543]: "Get" accept:application/json, */*,audit-id:d381e845-442d-4da7-a5e8-533006b7246e,client:10.128.0.61,api-group:admissionregistration.k8s.io,api-version:v1,name:mcn-guards,subresource:,namespace:,protocol:HTTP/2.0,resource:validatingadmissionpolicies,scope:resource,url:/apis/admissionregistration.k8s.io/v1/validatingadmissionpolicies/mcn-guards,user-agent:machine-config-operator/v0.0.0 (linux/amd64) kubernetes/$Format/machine-config,verb:GET (23-Feb-2026 13:16:00.500) (total time: 8165ms): Trace[1870934543]: ---"About to write a response" 8164ms (13:16:08.665) Trace[1870934543]: [8.165093682s] [8.165093682s] END E0223 13:16:11.404625 14 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded" logger="UnhandledError" E0223 13:16:11.405878 14 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError" E0223 13:16:11.411178 14 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError" E0223 13:16:11.412991 14 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError" I0223 13:16:11.414267 14 trace.go:236] Trace[282247105]: "Get" accept:application/vnd.kubernetes.protobuf, 
*/*,audit-id:585c9e29-3594-4afc-add2-54a1c4f4afa3,client:192.168.32.10,api-group:coordination.k8s.io,api-version:v1,name:kube-controller-manager,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager,user-agent:kube-controller-manager/v1.31.14 (linux/amd64) kubernetes/8311c4d/leader-election,verb:GET (23-Feb-2026 13:16:05.403) (total time: 6010ms): Trace[282247105]: [6.010205267s] [6.010205267s] END E0223 13:16:11.414535 14 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="9.67411ms" method="GET" path="/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager" result=null E0223 13:16:11.650286 14 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded" logger="UnhandledError" E0223 13:16:11.651501 14 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError" E0223 13:16:11.652692 14 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError" E0223 13:16:11.653912 14 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError" I0223 13:16:11.655117 14 trace.go:236] Trace[147624825]: "Get" accept:application/vnd.kubernetes.protobuf, */*,audit-id:20088232-bb35-4cae-8033-19a39c851c08,client:::1,api-group:coordination.k8s.io,api-version:v1,name:kube-scheduler,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler,user-agent:kube-scheduler/v1.31.14 (linux/amd64) kubernetes/8311c4d/leader-election,verb:GET (23-Feb-2026 
13:16:06.649) (total time: 5005ms): Trace[147624825]: [5.005515462s] [5.005515462s] END E0223 13:16:11.655347 14 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="4.849455ms" method="GET" path="/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler" result=null E0223 13:16:11.974310 14 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"context canceled\"}: context canceled" logger="UnhandledError" E0223 13:16:11.975879 14 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError" E0223 13:16:11.977126 14 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError" E0223 13:16:11.978346 14 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError" I0223 13:16:11.979631 14 trace.go:236] Trace[1153913691]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:c6f8f194-70dd-4e9e-9562-80290635c3d7,client:192.168.32.10,api-group:coordination.k8s.io,api-version:v1,name:network-operator-lock,subresource:,namespace:openshift-network-operator,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/openshift-network-operator/leases/network-operator-lock,user-agent:cluster-network-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (23-Feb-2026 13:15:19.988) (total time: 51991ms): Trace[1153913691]: [51.991339132s] [51.991339132s] END E0223 13:16:11.979963 14 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="5.587516ms" method="GET" path="/apis/coordination.k8s.io/v1/namespaces/openshift-network-operator/leases/network-operator-lock" result=null E0223 13:16:12.575860 14 status.go:71] 
"Unhandled Error" err="apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded" logger="UnhandledError" E0223 13:16:12.576987 14 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError" E0223 13:16:12.578150 14 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError" E0223 13:16:12.579275 14 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError" I0223 13:16:12.580446 14 trace.go:236] Trace[529422752]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:cbc82e8d-6804-4bcf-abb4-9ffd6fcfeb3d,client:192.168.32.10,api-group:coordination.k8s.io,api-version:v1,name:master-0,subresource:,namespace:kube-node-lease,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0,user-agent:kubelet/v1.31.14 (linux/amd64) kubernetes/8311c4d,verb:GET (23-Feb-2026 13:16:02.576) (total time: 10004ms): Trace[529422752]: [10.004265674s] [10.004265674s] END E0223 13:16:12.580688 14 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="4.704831ms" method="GET" path="/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0" result=null I0223 13:16:14.711088 14 trace.go:236] Trace[1131307080]: "Get" 
accept:application/vnd.kubernetes.protobuf,application/json,audit-id:b85c3943-06bc-4e2c-8500-ab9ff6e90191,client:192.168.32.10,api-group:apps,api-version:v1,name:cluster-cloud-controller-manager,subresource:,namespace:openshift-cloud-controller-manager-operator,protocol:HTTP/2.0,resource:deployments,scope:resource,url:/apis/apps/v1/namespaces/openshift-cloud-controller-manager-operator/deployments/cluster-cloud-controller-manager,user-agent:cluster-version-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (23-Feb-2026 13:16:01.691) (total time: 13019ms): Trace[1131307080]: [13.019693117s] [13.019693117s] END I0223 13:16:14.711487 14 trace.go:236] Trace[1711417487]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:095d8fc4-62ba-40c3-ab4e-4806a4268582,client:192.168.32.10,api-group:apps,api-version:v1,name:control-plane-machine-set-operator,subresource:,namespace:openshift-machine-api,protocol:HTTP/2.0,resource:deployments,scope:resource,url:/apis/apps/v1/namespaces/openshift-machine-api/deployments/control-plane-machine-set-operator,user-agent:cluster-version-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (23-Feb-2026 13:15:59.565) (total time: 15145ms): Trace[1711417487]: ---"About to write a response" 15145ms (13:16:14.711) Trace[1711417487]: [15.145661667s] [15.145661667s] END I0223 13:16:14.712148 14 trace.go:236] Trace[760905354]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:c4d60aa4-1161-4178-b1aa-3a811fd699da,client:10.128.0.5,api-group:apps,api-version:v1,name:service-ca,subresource:,namespace:openshift-service-ca,protocol:HTTP/2.0,resource:deployments,scope:resource,url:/apis/apps/v1/namespaces/openshift-service-ca/deployments/service-ca,user-agent:service-ca-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (23-Feb-2026 13:16:00.803) (total time: 13908ms): Trace[760905354]: ---"About to write a response" 13908ms (13:16:14.712) Trace[760905354]: [13.908746887s] 
[13.908746887s] END I0223 13:16:14.713439 14 trace.go:236] Trace[1552467138]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:36636f16-781c-4127-a153-99f0f7a16ec9,client:10.128.0.8,api-group:apps,api-version:v1,name:apiserver,subresource:,namespace:openshift-oauth-apiserver,protocol:HTTP/2.0,resource:deployments,scope:resource,url:/apis/apps/v1/namespaces/openshift-oauth-apiserver/deployments/apiserver,user-agent:authentication-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (23-Feb-2026 13:16:01.876) (total time: 12836ms): Trace[1552467138]: ---"About to write a response" 12835ms (13:16:14.711) Trace[1552467138]: [12.836589531s] [12.836589531s] END I0223 13:16:14.713699 14 trace.go:236] Trace[289320457]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:085fe1b7-54c4-42fe-8b98-5219503bfa22,client:10.128.0.17,api-group:apps,api-version:v1,name:migrator,subresource:,namespace:openshift-kube-storage-version-migrator,protocol:HTTP/2.0,resource:deployments,scope:resource,url:/apis/apps/v1/namespaces/openshift-kube-storage-version-migrator/deployments/migrator,user-agent:cluster-kube-storage-version-migrator-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (23-Feb-2026 13:15:31.183) (total time: 43529ms): Trace[289320457]: ---"About to write a response" 43528ms (13:16:14.712) Trace[289320457]: [43.529855554s] [43.529855554s] END I0223 13:16:14.713895 14 trace.go:236] Trace[1084541753]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:d69cccb7-dcbb-42d7-acf0-98999aee2c97,client:10.128.0.8,api-group:apps,api-version:v1,name:oauth-openshift,subresource:,namespace:openshift-authentication,protocol:HTTP/2.0,resource:deployments,scope:resource,url:/apis/apps/v1/namespaces/openshift-authentication/deployments/oauth-openshift,user-agent:authentication-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (23-Feb-2026 13:15:31.208) (total time: 43505ms): Trace[1084541753]: 
---"About to write a response" 43504ms (13:16:14.712) Trace[1084541753]: [43.505211987s] [43.505211987s] END I0223 13:16:14.713895 14 trace.go:236] Trace[1460655345]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:11f968e0-9d6e-4284-89b6-5497dfcc32d5,client:192.168.32.10,api-group:apps,api-version:v1,name:machine-approver,subresource:,namespace:openshift-cluster-machine-approver,protocol:HTTP/2.0,resource:deployments,scope:resource,url:/apis/apps/v1/namespaces/openshift-cluster-machine-approver/deployments/machine-approver,user-agent:cluster-version-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (23-Feb-2026 13:16:01.042) (total time: 13671ms): Trace[1460655345]: ---"About to write a response" 13669ms (13:16:14.711) Trace[1460655345]: [13.671686227s] [13.671686227s] END I0223 13:16:14.714092 14 trace.go:236] Trace[374535174]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:d569913a-8cf6-4133-9b34-7efbcbdf944f,client:192.168.32.10,api-group:apps,api-version:v1,name:authentication-operator,subresource:,namespace:openshift-authentication-operator,protocol:HTTP/2.0,resource:deployments,scope:resource,url:/apis/apps/v1/namespaces/openshift-authentication-operator/deployments/authentication-operator,user-agent:cluster-version-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (23-Feb-2026 13:15:58.466) (total time: 16247ms): Trace[374535174]: ---"About to write a response" 16246ms (13:16:14.712) Trace[374535174]: [16.247732086s] [16.247732086s] END I0223 13:16:14.714126 14 trace.go:236] Trace[377861967]: "Get" 
accept:application/vnd.kubernetes.protobuf,application/json,audit-id:99ff5b7a-8424-4a04-8b6b-7e039b5c5a89,client:192.168.32.10,api-group:apps,api-version:v1,name:etcd-operator,subresource:,namespace:openshift-etcd-operator,protocol:HTTP/2.0,resource:deployments,scope:resource,url:/apis/apps/v1/namespaces/openshift-etcd-operator/deployments/etcd-operator,user-agent:cluster-version-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (23-Feb-2026 13:16:01.065) (total time: 13648ms): Trace[377861967]: ---"About to write a response" 13646ms (13:16:14.712) Trace[377861967]: [13.648332036s] [13.648332036s] END I0223 13:16:14.714385 14 trace.go:236] Trace[1620622826]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:d465731a-aaa2-41d1-ae9c-44a59365d63f,client:192.168.32.10,api-group:apps,api-version:v1,name:openshift-apiserver-operator,subresource:,namespace:openshift-apiserver-operator,protocol:HTTP/2.0,resource:deployments,scope:resource,url:/apis/apps/v1/namespaces/openshift-apiserver-operator/deployments/openshift-apiserver-operator,user-agent:cluster-version-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (23-Feb-2026 13:15:58.290) (total time: 16423ms): Trace[1620622826]: ---"About to write a response" 16422ms (13:16:14.713) Trace[1620622826]: [16.42357933s] [16.42357933s] END I0223 13:16:14.714403 14 trace.go:236] Trace[1668455926]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:6821aeed-f63a-46f6-be7c-07f70892c589,client:192.168.32.10,api-group:apps,api-version:v1,name:kube-apiserver-operator,subresource:,namespace:openshift-kube-apiserver-operator,protocol:HTTP/2.0,resource:deployments,scope:resource,url:/apis/apps/v1/namespaces/openshift-kube-apiserver-operator/deployments/kube-apiserver-operator,user-agent:cluster-version-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (23-Feb-2026 13:16:00.890) (total time: 13823ms): Trace[1668455926]: ---"About to write a response" 13821ms 
(13:16:14.711) Trace[1668455926]: [13.82387556s] [13.82387556s] END I0223 13:16:14.714571 14 trace.go:236] Trace[1112028309]: "Delete" accept:application/json, */*,audit-id:8dc262e5-5564-49f9-851c-46036fe85a68,client:10.128.0.13,api-group:apps,api-version:v1,name:csi-snapshot-webhook,subresource:,namespace:openshift-cluster-storage-operator,protocol:HTTP/2.0,resource:deployments,scope:resource,url:/apis/apps/v1/namespaces/openshift-cluster-storage-operator/deployments/csi-snapshot-webhook,user-agent:csi-snapshot-controller-operator/v0.0.0 (linux/amd64) kubernetes/$Format/csi-snapshot-controller,verb:DELETE (23-Feb-2026 13:15:42.375) (total time: 32339ms): Trace[1112028309]: [32.339142246s] [32.339142246s] END I0223 13:16:14.714616 14 trace.go:236] Trace[984632531]: "Get" accept:application/json, */*,audit-id:24c259f7-1701-474e-8718-ae86b606d43c,client:10.128.0.13,api-group:apps,api-version:v1,name:csi-snapshot-controller,subresource:,namespace:openshift-cluster-storage-operator,protocol:HTTP/2.0,resource:deployments,scope:resource,url:/apis/apps/v1/namespaces/openshift-cluster-storage-operator/deployments/csi-snapshot-controller,user-agent:csi-snapshot-controller-operator/v0.0.0 (linux/amd64) kubernetes/$Format/csi-snapshot-controller,verb:GET (23-Feb-2026 13:15:52.445) (total time: 22269ms): Trace[984632531]: ---"About to write a response" 22267ms (13:16:14.712) Trace[984632531]: [22.269169618s] [22.269169618s] END I0223 13:16:14.716498 14 trace.go:236] Trace[2071009055]: "Patch" accept:application/json,audit-id:d023a1ae-2774-4fbd-b285-887070081cbc,client:192.168.32.10,api-group:apps,api-version:v1,name:ovnkube-control-plane,subresource:,namespace:openshift-ovn-kubernetes,protocol:HTTP/2.0,resource:deployments,scope:resource,url:/apis/apps/v1/namespaces/openshift-ovn-kubernetes/deployments/ovnkube-control-plane,user-agent:network-operator/4.18.0-202601302238.p2.gf63a7ff.assembly.stream.el9-f63a7ff,verb:APPLY (23-Feb-2026 13:15:59.668) (total time: 15047ms): 
Trace[2071009055]: ["GuaranteedUpdate etcd3" audit-id:d023a1ae-2774-4fbd-b285-887070081cbc,key:/deployments/openshift-ovn-kubernetes/ovnkube-control-plane,type:*apps.Deployment,resource:deployments.apps 15047ms (13:15:59.668)] Trace[2071009055]: ---"Object stored in database" 15041ms (13:16:14.715) Trace[2071009055]: [15.04791178s] [15.04791178s] END I0223 13:16:14.716586 14 trace.go:236] Trace[1473889428]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:b7a1bc9f-1247-42f2-ba4b-4ca6b5ab5e96,client:10.128.0.9,api-group:apps,api-version:v1,name:apiserver,subresource:,namespace:openshift-apiserver,protocol:HTTP/2.0,resource:deployments,scope:resource,url:/apis/apps/v1/namespaces/openshift-apiserver/deployments/apiserver,user-agent:cluster-openshift-apiserver-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (23-Feb-2026 13:16:00.408) (total time: 14307ms): Trace[1473889428]: ---"About to write a response" 14306ms (13:16:14.715) Trace[1473889428]: [14.307571257s] [14.307571257s] END I0223 13:16:14.716849 14 trace.go:236] Trace[899162684]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:3673963f-f366-4194-bfcd-bb650d556fab,client:192.168.32.10,api-group:apps,api-version:v1,name:openshift-kube-scheduler-operator,subresource:,namespace:openshift-kube-scheduler-operator,protocol:HTTP/2.0,resource:deployments,scope:resource,url:/apis/apps/v1/namespaces/openshift-kube-scheduler-operator/deployments/openshift-kube-scheduler-operator,user-agent:cluster-version-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (23-Feb-2026 13:15:57.988) (total time: 16728ms): Trace[899162684]: ---"About to write a response" 16727ms (13:16:14.715) Trace[899162684]: [16.728388589s] [16.728388589s] END I0223 13:16:14.716910 14 trace.go:236] Trace[1414490621]: "Get" 
accept:application/vnd.kubernetes.protobuf,application/json,audit-id:36773b63-b159-4abc-a200-a0cd774d066f,client:192.168.32.10,api-group:apps,api-version:v1,name:marketplace-operator,subresource:,namespace:openshift-marketplace,protocol:HTTP/2.0,resource:deployments,scope:resource,url:/apis/apps/v1/namespaces/openshift-marketplace/deployments/marketplace-operator,user-agent:cluster-version-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (23-Feb-2026 13:16:00.141) (total time: 14575ms): Trace[1414490621]: ---"About to write a response" 14575ms (13:16:14.716) Trace[1414490621]: [14.575788266s] [14.575788266s] END I0223 13:16:14.717012 14 trace.go:236] Trace[1286086274]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:14fdcd83-926a-42cc-869a-35d63b2d8cb2,client:10.128.0.23,api-group:apps,api-version:v1,name:controller-manager,subresource:,namespace:openshift-controller-manager,protocol:HTTP/2.0,resource:deployments,scope:resource,url:/apis/apps/v1/namespaces/openshift-controller-manager/deployments/controller-manager,user-agent:cluster-openshift-controller-manager-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (23-Feb-2026 13:16:00.484) (total time: 14232ms): Trace[1286086274]: ---"About to write a response" 14231ms (13:16:14.716) Trace[1286086274]: [14.232258897s] [14.232258897s] END I0223 13:16:14.717613 14 trace.go:236] Trace[493852411]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:9fd3d9dd-76e4-4380-a6c8-79bf5f05a6e0,client:192.168.32.10,api-group:apps,api-version:v1,name:openshift-controller-manager-operator,subresource:,namespace:openshift-controller-manager-operator,protocol:HTTP/2.0,resource:deployments,scope:resource,url:/apis/apps/v1/namespaces/openshift-controller-manager-operator/deployments/openshift-controller-manager-operator,user-agent:cluster-version-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (23-Feb-2026 13:15:57.957) (total time: 16759ms): Trace[493852411]: 
---"About to write a response" 16758ms (13:16:14.716) Trace[493852411]: [16.759754583s] [16.759754583s] END I0223 13:16:14.717770 14 trace.go:236] Trace[1480324414]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:b6736d20-e3a0-4688-9b1f-fbf5eafda2dd,client:192.168.32.10,api-group:apps,api-version:v1,name:cluster-olm-operator,subresource:,namespace:openshift-cluster-olm-operator,protocol:HTTP/2.0,resource:deployments,scope:resource,url:/apis/apps/v1/namespaces/openshift-cluster-olm-operator/deployments/cluster-olm-operator,user-agent:cluster-version-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (23-Feb-2026 13:15:57.961) (total time: 16755ms): Trace[1480324414]: ---"About to write a response" 16754ms (13:16:14.716) Trace[1480324414]: [16.755715821s] [16.755715821s] END I0223 13:16:15.302558 14 trace.go:236] Trace[122639526]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:53905b5c-e219-40da-8465-8cd1854b9530,client:192.168.32.10,api-group:apiextensions.k8s.io,api-version:v1,name:insightsoperators.operator.openshift.io,subresource:,namespace:,protocol:HTTP/2.0,resource:customresourcedefinitions,scope:resource,url:/apis/apiextensions.k8s.io/v1/customresourcedefinitions/insightsoperators.operator.openshift.io,user-agent:cluster-version-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (23-Feb-2026 13:16:01.490) (total time: 13811ms): Trace[122639526]: ---"About to write a response" 13811ms (13:16:15.301) Trace[122639526]: [13.811801443s] [13.811801443s] END I0223 13:16:15.303098 14 trace.go:236] Trace[164205102]: "Get" accept:application/json, 
*/*,audit-id:2c1a28ae-bf8e-4217-b11a-b28345b0db42,client:10.128.0.22,api-group:apiextensions.k8s.io,api-version:v1,name:apirequestcounts.apiserver.openshift.io,subresource:,namespace:,protocol:HTTP/2.0,resource:customresourcedefinitions,scope:resource,url:/apis/apiextensions.k8s.io/v1/customresourcedefinitions/apirequestcounts.apiserver.openshift.io,user-agent:cluster-kube-apiserver-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (23-Feb-2026 13:15:58.632) (total time: 16670ms): Trace[164205102]: ---"About to write a response" 16670ms (13:16:15.302) Trace[164205102]: [16.670947877s] [16.670947877s] END I0223 13:16:15.304112 14 trace.go:236] Trace[1691978818]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:2692c292-69ac-43dd-b9cc-fcc5ae9665be,client:192.168.32.10,api-group:apiextensions.k8s.io,api-version:v1,name:containerruntimeconfigs.machineconfiguration.openshift.io,subresource:,namespace:,protocol:HTTP/2.0,resource:customresourcedefinitions,scope:resource,url:/apis/apiextensions.k8s.io/v1/customresourcedefinitions/containerruntimeconfigs.machineconfiguration.openshift.io,user-agent:cluster-version-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (23-Feb-2026 13:15:58.616) (total time: 16687ms): Trace[1691978818]: ---"About to write a response" 16687ms (13:16:15.303) Trace[1691978818]: [16.687872618s] [16.687872618s] END I0223 13:16:15.305080 14 trace.go:236] Trace[682343094]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:ae180517-0485-4639-8726-3451e45408b3,client:192.168.32.10,api-group:apiextensions.k8s.io,api-version:v1,name:builds.config.openshift.io,subresource:,namespace:,protocol:HTTP/2.0,resource:customresourcedefinitions,scope:resource,url:/apis/apiextensions.k8s.io/v1/customresourcedefinitions/builds.config.openshift.io,user-agent:cluster-version-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (23-Feb-2026 13:15:23.150) (total time: 52154ms): Trace[682343094]: 
---"About to write a response" 52153ms (13:16:15.304) Trace[682343094]: [52.154490866s] [52.154490866s] END I0223 13:16:15.305418 14 trace.go:236] Trace[979139330]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:a873d72a-9a89-45cc-9f32-034720f7d3a3,client:192.168.32.10,api-group:apiextensions.k8s.io,api-version:v1,name:alertingrules.monitoring.openshift.io,subresource:,namespace:,protocol:HTTP/2.0,resource:customresourcedefinitions,scope:resource,url:/apis/apiextensions.k8s.io/v1/customresourcedefinitions/alertingrules.monitoring.openshift.io,user-agent:cluster-version-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (23-Feb-2026 13:15:32.183) (total time: 43121ms): Trace[979139330]: ---"About to write a response" 43121ms (13:16:15.304) Trace[979139330]: [43.121817933s] [43.121817933s] END I0223 13:16:15.305497 14 trace.go:236] Trace[1367270753]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:a790c28e-2360-4b4f-bb6d-dfb3b34644b7,client:192.168.32.10,api-group:apiextensions.k8s.io,api-version:v1,name:kubecontrollermanagers.operator.openshift.io,subresource:,namespace:,protocol:HTTP/2.0,resource:customresourcedefinitions,scope:resource,url:/apis/apiextensions.k8s.io/v1/customresourcedefinitions/kubecontrollermanagers.operator.openshift.io,user-agent:cluster-version-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (23-Feb-2026 13:15:23.139) (total time: 52166ms): Trace[1367270753]: ---"About to write a response" 52165ms (13:16:15.305) Trace[1367270753]: [52.166303845s] [52.166303845s] END I0223 13:16:15.305552 14 trace.go:236] Trace[730540704]: "Get" 
accept:application/vnd.kubernetes.protobuf,application/json,audit-id:6025f3fe-18de-4661-bb2d-18ba429c77bb,client:192.168.32.10,api-group:apiextensions.k8s.io,api-version:v1,name:credentialsrequests.cloudcredential.openshift.io,subresource:,namespace:,protocol:HTTP/2.0,resource:customresourcedefinitions,scope:resource,url:/apis/apiextensions.k8s.io/v1/customresourcedefinitions/credentialsrequests.cloudcredential.openshift.io,user-agent:cluster-version-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (23-Feb-2026 13:15:57.878) (total time: 17427ms): Trace[730540704]: ---"About to write a response" 17426ms (13:16:15.305) Trace[730540704]: [17.427320847s] [17.427320847s] END I0223 13:16:15.305726 14 trace.go:236] Trace[1480147052]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:20b58eb9-8786-4c63-b3f6-52e07a5b4914,client:192.168.32.10,api-group:apiextensions.k8s.io,api-version:v1,name:clusteroperators.config.openshift.io,subresource:,namespace:,protocol:HTTP/2.0,resource:customresourcedefinitions,scope:resource,url:/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clusteroperators.config.openshift.io,user-agent:cluster-version-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (23-Feb-2026 13:15:57.899) (total time: 17406ms): Trace[1480147052]: ---"About to write a response" 17405ms (13:16:15.305) Trace[1480147052]: [17.406150907s] [17.406150907s] END I0223 13:16:15.306445 14 trace.go:236] Trace[1216901804]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:d30335a3-f0c8-4ae0-815c-3adb7f0d5b49,client:192.168.32.10,api-group:apiextensions.k8s.io,api-version:v1,name:dnsrecords.ingress.operator.openshift.io,subresource:,namespace:,protocol:HTTP/2.0,resource:customresourcedefinitions,scope:resource,url:/apis/apiextensions.k8s.io/v1/customresourcedefinitions/dnsrecords.ingress.operator.openshift.io,user-agent:cluster-version-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (23-Feb-2026 
13:15:41.578) (total time: 33728ms): Trace[1216901804]: ---"About to write a response" 33727ms (13:16:15.306) Trace[1216901804]: [33.728027259s] [33.728027259s] END I0223 13:16:15.306733 14 trace.go:236] Trace[509386508]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:2f4a8255-fdfd-4af1-9dce-a4f6e2e6838d,client:192.168.32.10,api-group:apiextensions.k8s.io,api-version:v1,name:clusterautoscalers.autoscaling.openshift.io,subresource:,namespace:,protocol:HTTP/2.0,resource:customresourcedefinitions,scope:resource,url:/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clusterautoscalers.autoscaling.openshift.io,user-agent:cluster-version-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (23-Feb-2026 13:15:32.188) (total time: 43118ms): Trace[509386508]: ---"About to write a response" 43118ms (13:16:15.306) Trace[509386508]: [43.118406749s] [43.118406749s] END I0223 13:16:15.306803 14 trace.go:236] Trace[1100692941]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:27138381-a273-4493-a656-8635252c9d8c,client:192.168.32.10,api-group:apiextensions.k8s.io,api-version:v1,name:ipaddresses.ipam.cluster.x-k8s.io,subresource:,namespace:,protocol:HTTP/2.0,resource:customresourcedefinitions,scope:resource,url:/apis/apiextensions.k8s.io/v1/customresourcedefinitions/ipaddresses.ipam.cluster.x-k8s.io,user-agent:cluster-version-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (23-Feb-2026 13:15:23.144) (total time: 52162ms): Trace[1100692941]: ---"About to write a response" 52159ms (13:16:15.303) Trace[1100692941]: [52.162119499s] [52.162119499s] END I0223 13:16:15.307049 14 trace.go:236] Trace[1297398326]: "Get" 
accept:application/vnd.kubernetes.protobuf,application/json,audit-id:5cf0286f-fb91-4a14-85c5-8fc17114167e,client:192.168.32.10,api-group:apiextensions.k8s.io,api-version:v1,name:servicecas.operator.openshift.io,subresource:,namespace:,protocol:HTTP/2.0,resource:customresourcedefinitions,scope:resource,url:/apis/apiextensions.k8s.io/v1/customresourcedefinitions/servicecas.operator.openshift.io,user-agent:cluster-version-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (23-Feb-2026 13:15:57.881) (total time: 17425ms): Trace[1297398326]: ---"About to write a response" 17425ms (13:16:15.306) Trace[1297398326]: [17.42596351s] [17.42596351s] END I0223 13:16:15.307052 14 trace.go:236] Trace[1522457611]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:134c9ec9-397a-44a1-81c6-8bd88d673ae4,client:192.168.32.10,api-group:apiextensions.k8s.io,api-version:v1,name:machines.machine.openshift.io,subresource:,namespace:,protocol:HTTP/2.0,resource:customresourcedefinitions,scope:resource,url:/apis/apiextensions.k8s.io/v1/customresourcedefinitions/machines.machine.openshift.io,user-agent:cluster-version-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (23-Feb-2026 13:15:57.966) (total time: 17340ms): Trace[1522457611]: ---"About to write a response" 17339ms (13:16:15.306) Trace[1522457611]: [17.340478476s] [17.340478476s] END I0223 13:16:15.307518 14 trace.go:236] Trace[764889816]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:80e450be-02fb-4a3f-bb7c-eefe100cd178,client:192.168.32.10,api-group:apiextensions.k8s.io,api-version:v1,name:storageversionmigrations.migration.k8s.io,subresource:,namespace:,protocol:HTTP/2.0,resource:customresourcedefinitions,scope:resource,url:/apis/apiextensions.k8s.io/v1/customresourcedefinitions/storageversionmigrations.migration.k8s.io,user-agent:cluster-version-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (23-Feb-2026 13:15:32.186) (total time: 43121ms): 
Trace[764889816]: ---"About to write a response" 43121ms (13:16:15.307) Trace[764889816]: [43.121477604s] [43.121477604s] END I0223 13:16:15.307616 14 trace.go:236] Trace[2086058368]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:53d3be91-613f-4265-8084-57b4f9d1ffd9,client:192.168.32.10,api-group:apiextensions.k8s.io,api-version:v1,name:consoleclidownloads.console.openshift.io,subresource:,namespace:,protocol:HTTP/2.0,resource:customresourcedefinitions,scope:resource,url:/apis/apiextensions.k8s.io/v1/customresourcedefinitions/consoleclidownloads.console.openshift.io,user-agent:cluster-version-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (23-Feb-2026 13:15:32.187) (total time: 43120ms): Trace[2086058368]: ---"About to write a response" 43120ms (13:16:15.307) Trace[2086058368]: [43.120488717s] [43.120488717s] END I0223 13:16:15.307913 14 trace.go:236] Trace[55615854]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:86501201-8bcb-4cce-bc92-e145cbec7012,client:192.168.32.10,api-group:apiextensions.k8s.io,api-version:v1,name:openshiftapiservers.operator.openshift.io,subresource:,namespace:,protocol:HTTP/2.0,resource:customresourcedefinitions,scope:resource,url:/apis/apiextensions.k8s.io/v1/customresourcedefinitions/openshiftapiservers.operator.openshift.io,user-agent:cluster-version-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (23-Feb-2026 13:15:23.147) (total time: 52160ms): Trace[55615854]: ---"About to write a response" 52159ms (13:16:15.307) Trace[55615854]: [52.160143824s] [52.160143824s] END I0223 13:16:15.308152 14 trace.go:236] Trace[593767946]: "Get" 
accept:application/vnd.kubernetes.protobuf,application/json,audit-id:e0a64337-7867-46a8-9855-face68591a6a,client:192.168.32.10,api-group:apiextensions.k8s.io,api-version:v1,name:egressrouters.network.operator.openshift.io,subresource:,namespace:,protocol:HTTP/2.0,resource:customresourcedefinitions,scope:resource,url:/apis/apiextensions.k8s.io/v1/customresourcedefinitions/egressrouters.network.operator.openshift.io,user-agent:cluster-version-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (23-Feb-2026 13:15:57.896) (total time: 17411ms): Trace[593767946]: ---"About to write a response" 17407ms (13:16:15.303) Trace[593767946]: [17.411804025s] [17.411804025s] END I0223 13:16:15.308303 14 trace.go:236] Trace[1336134406]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:ea1bbfb5-da5f-477b-a1a0-c376591c1f17,client:192.168.32.10,api-group:apiextensions.k8s.io,api-version:v1,name:operatorhubs.config.openshift.io,subresource:,namespace:,protocol:HTTP/2.0,resource:customresourcedefinitions,scope:resource,url:/apis/apiextensions.k8s.io/v1/customresourcedefinitions/operatorhubs.config.openshift.io,user-agent:cluster-version-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (23-Feb-2026 13:15:23.144) (total time: 52163ms): Trace[1336134406]: ---"About to write a response" 52162ms (13:16:15.307) Trace[1336134406]: [52.163374314s] [52.163374314s] END I0223 13:16:15.308360 14 trace.go:236] Trace[943836351]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:72621bea-51cf-49dc-83e5-18c9cca3fa07,client:192.168.32.10,api-group:apiextensions.k8s.io,api-version:v1,name:apiservers.config.openshift.io,subresource:,namespace:,protocol:HTTP/2.0,resource:customresourcedefinitions,scope:resource,url:/apis/apiextensions.k8s.io/v1/customresourcedefinitions/apiservers.config.openshift.io,user-agent:cluster-version-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (23-Feb-2026 13:15:32.187) (total time: 43120ms): 
Trace[943836351]: ---"About to write a response" 43120ms (13:16:15.307) Trace[943836351]: [43.120901549s] [43.120901549s] END I0223 13:16:15.308410 14 trace.go:236] Trace[1795101566]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:c2f7433a-ce83-4777-9502-f5abd3a8e9be,client:192.168.32.10,api-group:apiextensions.k8s.io,api-version:v1,name:authentications.operator.openshift.io,subresource:,namespace:,protocol:HTTP/2.0,resource:customresourcedefinitions,scope:resource,url:/apis/apiextensions.k8s.io/v1/customresourcedefinitions/authentications.operator.openshift.io,user-agent:cluster-version-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (23-Feb-2026 13:15:23.144) (total time: 52163ms): Trace[1795101566]: ---"About to write a response" 52162ms (13:16:15.307) Trace[1795101566]: [52.163398555s] [52.163398555s] END I0223 13:16:15.308524 14 trace.go:236] Trace[1166161113]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:752b75c1-16c2-4219-912c-e2c1ac854542,client:192.168.32.10,api-group:apiextensions.k8s.io,api-version:v1,name:csisnapshotcontrollers.operator.openshift.io,subresource:,namespace:,protocol:HTTP/2.0,resource:customresourcedefinitions,scope:resource,url:/apis/apiextensions.k8s.io/v1/customresourcedefinitions/csisnapshotcontrollers.operator.openshift.io,user-agent:cluster-version-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (23-Feb-2026 13:15:32.185) (total time: 43122ms): Trace[1166161113]: ---"About to write a response" 43121ms (13:16:15.307) Trace[1166161113]: [43.122485352s] [43.122485352s] END I0223 13:16:15.308989 14 trace.go:236] Trace[1184104093]: "Get" 
accept:application/vnd.kubernetes.protobuf,application/json,audit-id:ee9a962c-97e2-4e9b-a680-f655d250bb46,client:192.168.32.10,api-group:apiextensions.k8s.io,api-version:v1,name:kubestorageversionmigrators.operator.openshift.io,subresource:,namespace:,protocol:HTTP/2.0,resource:customresourcedefinitions,scope:resource,url:/apis/apiextensions.k8s.io/v1/customresourcedefinitions/kubestorageversionmigrators.operator.openshift.io,user-agent:cluster-version-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (23-Feb-2026 13:15:23.139) (total time: 52169ms): Trace[1184104093]: ---"About to write a response" 52168ms (13:16:15.308) Trace[1184104093]: [52.169136625s] [52.169136625s] END I0223 13:16:15.309128 14 trace.go:236] Trace[1397905879]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:453accf0-cca0-427a-b6da-38cc8fe72ad5,client:192.168.32.10,api-group:apiextensions.k8s.io,api-version:v1,name:openshiftcontrollermanagers.operator.openshift.io,subresource:,namespace:,protocol:HTTP/2.0,resource:customresourcedefinitions,scope:resource,url:/apis/apiextensions.k8s.io/v1/customresourcedefinitions/openshiftcontrollermanagers.operator.openshift.io,user-agent:cluster-version-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (23-Feb-2026 13:15:23.122) (total time: 52186ms): Trace[1397905879]: ---"About to write a response" 52185ms (13:16:15.308) Trace[1397905879]: [52.186260302s] [52.186260302s] END I0223 13:16:15.309258 14 trace.go:236] Trace[1803479564]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:607e2420-a0af-400c-a1e3-c8fb0517beab,client:192.168.32.10,api-group:apiextensions.k8s.io,api-version:v1,name:dnses.operator.openshift.io,subresource:,namespace:,protocol:HTTP/2.0,resource:customresourcedefinitions,scope:resource,url:/apis/apiextensions.k8s.io/v1/customresourcedefinitions/dnses.operator.openshift.io,user-agent:cluster-version-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (23-Feb-2026 
13:15:57.880) (total time: 17428ms): Trace[1803479564]: ---"About to write a response" 17428ms (13:16:15.308) Trace[1803479564]: [17.428986064s] [17.428986064s] END I0223 13:16:15.309337 14 trace.go:236] Trace[1841859056]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:b1e47d66-a9a2-4e60-9619-b43213237b42,client:192.168.32.10,api-group:apiextensions.k8s.io,api-version:v1,name:olms.operator.openshift.io,subresource:,namespace:,protocol:HTTP/2.0,resource:customresourcedefinitions,scope:resource,url:/apis/apiextensions.k8s.io/v1/customresourcedefinitions/olms.operator.openshift.io,user-agent:cluster-version-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (23-Feb-2026 13:15:23.150) (total time: 52158ms): Trace[1841859056]: ---"About to write a response" 52158ms (13:16:15.308) Trace[1841859056]: [52.158883799s] [52.158883799s] END I0223 13:16:15.309354 14 trace.go:236] Trace[1238872985]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:1e83a731-f887-4bc4-bb87-47858d043d1b,client:192.168.32.10,api-group:apiextensions.k8s.io,api-version:v1,name:kubeapiservers.operator.openshift.io,subresource:,namespace:,protocol:HTTP/2.0,resource:customresourcedefinitions,scope:resource,url:/apis/apiextensions.k8s.io/v1/customresourcedefinitions/kubeapiservers.operator.openshift.io,user-agent:cluster-version-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (23-Feb-2026 13:15:23.139) (total time: 52170ms): Trace[1238872985]: ---"About to write a response" 52169ms (13:16:15.308) Trace[1238872985]: [52.170148793s] [52.170148793s] END I0223 13:16:15.309445 14 trace.go:236] Trace[1730131611]: "Get" 
accept:application/vnd.kubernetes.protobuf,application/json,audit-id:77d46ba0-32cb-47cd-806a-d3f9ae4dd8bb,client:192.168.32.10,api-group:apiextensions.k8s.io,api-version:v1,name:storages.operator.openshift.io,subresource:,namespace:,protocol:HTTP/2.0,resource:customresourcedefinitions,scope:resource,url:/apis/apiextensions.k8s.io/v1/customresourcedefinitions/storages.operator.openshift.io,user-agent:cluster-version-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (23-Feb-2026 13:15:23.143) (total time: 52166ms): Trace[1730131611]: ---"About to write a response" 52165ms (13:16:15.309) Trace[1730131611]: [52.16612321s] [52.16612321s] END I0223 13:16:15.309477 14 trace.go:236] Trace[1615937904]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:24e6b9f1-d27c-44e2-baae-caca3d6714aa,client:192.168.32.10,api-group:apiextensions.k8s.io,api-version:v1,name:configs.samples.operator.openshift.io,subresource:,namespace:,protocol:HTTP/2.0,resource:customresourcedefinitions,scope:resource,url:/apis/apiextensions.k8s.io/v1/customresourcedefinitions/configs.samples.operator.openshift.io,user-agent:cluster-version-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (23-Feb-2026 13:15:32.183) (total time: 43125ms): Trace[1615937904]: ---"About to write a response" 43125ms (13:16:15.309) Trace[1615937904]: [43.125546578s] [43.125546578s] END I0223 13:16:15.310335 14 trace.go:236] Trace[2084611506]: "Get" accept:application/json, */*,audit-id:97eff844-63eb-4f75-98d9-4bff04813e4b,client:10.128.0.13,api-group:apiextensions.k8s.io,api-version:v1,name:volumesnapshots.snapshot.storage.k8s.io,subresource:,namespace:,protocol:HTTP/2.0,resource:customresourcedefinitions,scope:resource,url:/apis/apiextensions.k8s.io/v1/customresourcedefinitions/volumesnapshots.snapshot.storage.k8s.io,user-agent:csi-snapshot-controller-operator/v0.0.0 (linux/amd64) kubernetes/$Format/csi-snapshot-controller,verb:GET (23-Feb-2026 13:15:18.429) (total time: 56880ms): 
Trace[2084611506]: ---"About to write a response" 56879ms (13:16:15.309) Trace[2084611506]: [56.880492692s] [56.880492692s] END I0223 13:16:15.310377 14 trace.go:236] Trace[1943163464]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:d7ca9062-582b-433f-9542-0a585ddfab3b,client:192.168.32.10,api-group:apiextensions.k8s.io,api-version:v1,name:baremetalhosts.metal3.io,subresource:,namespace:,protocol:HTTP/2.0,resource:customresourcedefinitions,scope:resource,url:/apis/apiextensions.k8s.io/v1/customresourcedefinitions/baremetalhosts.metal3.io,user-agent:cluster-version-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (23-Feb-2026 13:15:32.187) (total time: 43122ms): Trace[1943163464]: ---"About to write a response" 43121ms (13:16:15.309) Trace[1943163464]: [43.12239219s] [43.12239219s] END I0223 13:16:15.310547 14 trace.go:236] Trace[5726860]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:4d5dafe6-04f2-45da-9994-e46aa6254012,client:192.168.32.10,api-group:apiextensions.k8s.io,api-version:v1,name:clusterresourcequotas.quota.openshift.io,subresource:,namespace:,protocol:HTTP/2.0,resource:customresourcedefinitions,scope:resource,url:/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clusterresourcequotas.quota.openshift.io,user-agent:cluster-version-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (23-Feb-2026 13:15:23.139) (total time: 52171ms): Trace[5726860]: ---"About to write a response" 52170ms (13:16:15.310) Trace[5726860]: [52.171103359s] [52.171103359s] END I0223 13:16:15.310574 14 trace.go:236] Trace[1295743719]: "Get" 
accept:application/vnd.kubernetes.protobuf,application/json,audit-id:77fb1db2-0477-4964-9a65-d68fd4f90735,client:192.168.32.10,api-group:apiextensions.k8s.io,api-version:v1,name:etcds.operator.openshift.io,subresource:,namespace:,protocol:HTTP/2.0,resource:customresourcedefinitions,scope:resource,url:/apis/apiextensions.k8s.io/v1/customresourcedefinitions/etcds.operator.openshift.io,user-agent:cluster-version-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (23-Feb-2026 13:15:23.139) (total time: 52171ms): Trace[1295743719]: ---"About to write a response" 52170ms (13:16:15.310) Trace[1295743719]: [52.171061348s] [52.171061348s] END I0223 13:16:15.310900 14 trace.go:236] Trace[628850134]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:599fba84-3fbf-4ebb-a626-674b2309d417,client:192.168.32.10,api-group:apiextensions.k8s.io,api-version:v1,name:controlplanemachinesets.machine.openshift.io,subresource:,namespace:,protocol:HTTP/2.0,resource:customresourcedefinitions,scope:resource,url:/apis/apiextensions.k8s.io/v1/customresourcedefinitions/controlplanemachinesets.machine.openshift.io,user-agent:cluster-version-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (23-Feb-2026 13:15:23.150) (total time: 52160ms): Trace[628850134]: ---"About to write a response" 52160ms (13:16:15.310) Trace[628850134]: [52.160767211s] [52.160767211s] END I0223 13:16:15.310929 14 trace.go:236] Trace[1348849108]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:fb0e8b0c-486e-402e-bf65-c04ee8e17d0f,client:192.168.32.10,api-group:apiextensions.k8s.io,api-version:v1,name:performanceprofiles.performance.openshift.io,subresource:,namespace:,protocol:HTTP/2.0,resource:customresourcedefinitions,scope:resource,url:/apis/apiextensions.k8s.io/v1/customresourcedefinitions/performanceprofiles.performance.openshift.io,user-agent:cluster-version-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (23-Feb-2026 13:15:57.869) (total 
time: 17441ms): Trace[1348849108]: ---"About to write a response" 17440ms (13:16:15.310) Trace[1348849108]: [17.441261786s] [17.441261786s] END I0223 13:16:15.311146 14 trace.go:236] Trace[1427964372]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:6264389b-6be3-4e13-b85d-e0ec939728df,client:192.168.32.10,api-group:apiextensions.k8s.io,api-version:v1,name:clustercsidrivers.operator.openshift.io,subresource:,namespace:,protocol:HTTP/2.0,resource:customresourcedefinitions,scope:resource,url:/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clustercsidrivers.operator.openshift.io,user-agent:cluster-version-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (23-Feb-2026 13:15:23.144) (total time: 52166ms): Trace[1427964372]: ---"About to write a response" 52162ms (13:16:15.306) Trace[1427964372]: [52.166917933s] [52.166917933s] END I0223 13:16:15.312046 14 trace.go:236] Trace[1480487701]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:604a58ac-5b89-478f-b2b7-88167d97fd0a,client:192.168.32.10,api-group:apiextensions.k8s.io,api-version:v1,name:networks.operator.openshift.io,subresource:,namespace:,protocol:HTTP/2.0,resource:customresourcedefinitions,scope:resource,url:/apis/apiextensions.k8s.io/v1/customresourcedefinitions/networks.operator.openshift.io,user-agent:cluster-version-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (23-Feb-2026 13:15:23.150) (total time: 52161ms): Trace[1480487701]: ---"About to write a response" 52160ms (13:16:15.310) Trace[1480487701]: [52.161841801s] [52.161841801s] END I0223 13:16:15.312158 14 trace.go:236] Trace[2141526789]: "Get" 
accept:application/vnd.kubernetes.protobuf,application/json,audit-id:bc63de46-2a01-47b4-8674-38924cbc38b6,client:192.168.32.10,api-group:apiextensions.k8s.io,api-version:v1,name:catalogsources.operators.coreos.com,subresource:,namespace:,protocol:HTTP/2.0,resource:customresourcedefinitions,scope:resource,url:/apis/apiextensions.k8s.io/v1/customresourcedefinitions/catalogsources.operators.coreos.com,user-agent:cluster-version-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (23-Feb-2026 13:15:32.239) (total time: 43072ms): Trace[2141526789]: ---"About to write a response" 43070ms (13:16:15.309) Trace[2141526789]: [43.07255027s] [43.07255027s] END I0223 13:16:15.312892 14 trace.go:236] Trace[590327139]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:e64bb126-16c4-424a-a89f-290324619e5d,client:192.168.32.10,api-group:apiextensions.k8s.io,api-version:v1,name:cloudcredentials.operator.openshift.io,subresource:,namespace:,protocol:HTTP/2.0,resource:customresourcedefinitions,scope:resource,url:/apis/apiextensions.k8s.io/v1/customresourcedefinitions/cloudcredentials.operator.openshift.io,user-agent:cluster-version-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (23-Feb-2026 13:15:41.578) (total time: 33733ms): Trace[590327139]: ---"About to write a response" 33728ms (13:16:15.307) Trace[590327139]: [33.733871381s] [33.733871381s] END I0223 13:16:15.313099 14 trace.go:236] Trace[1383351688]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:ec6eaba1-18ae-48c0-a4c1-92a28dea7408,client:192.168.32.10,api-group:apiextensions.k8s.io,api-version:v1,name:kubeschedulers.operator.openshift.io,subresource:,namespace:,protocol:HTTP/2.0,resource:customresourcedefinitions,scope:resource,url:/apis/apiextensions.k8s.io/v1/customresourcedefinitions/kubeschedulers.operator.openshift.io,user-agent:cluster-version-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (23-Feb-2026 13:15:23.150) (total time: 52162ms): 
Trace[1383351688]: ---"About to write a response" 52153ms (13:16:15.304) Trace[1383351688]: [52.162423357s] [52.162423357s] END I0223 13:16:15.313105 14 trace.go:236] Trace[1748037328]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:d496f45c-3d59-40cf-a476-21fbb1ce6436,client:192.168.32.10,api-group:apiextensions.k8s.io,api-version:v1,name:configs.imageregistry.operator.openshift.io,subresource:,namespace:,protocol:HTTP/2.0,resource:customresourcedefinitions,scope:resource,url:/apis/apiextensions.k8s.io/v1/customresourcedefinitions/configs.imageregistry.operator.openshift.io,user-agent:cluster-version-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (23-Feb-2026 13:15:32.189) (total time: 43123ms): Trace[1748037328]: ---"About to write a response" 43121ms (13:16:15.311) Trace[1748037328]: [43.123209663s] [43.123209663s] END I0223 13:16:15.313303 14 trace.go:236] Trace[1367608771]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:a8e44903-a004-4588-b382-a182faa5b708,client:192.168.32.10,api-group:apiextensions.k8s.io,api-version:v1,name:consoles.operator.openshift.io,subresource:,namespace:,protocol:HTTP/2.0,resource:customresourcedefinitions,scope:resource,url:/apis/apiextensions.k8s.io/v1/customresourcedefinitions/consoles.operator.openshift.io,user-agent:cluster-version-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (23-Feb-2026 13:15:23.143) (total time: 52169ms): Trace[1367608771]: ---"About to write a response" 52168ms (13:16:15.312) Trace[1367608771]: [52.169576237s] [52.169576237s] END I0223 13:16:16.755521 14 trace.go:236] Trace[1498573542]: "Get" accept:application/json,audit-id:9f847a75-5289-4fcf-9ece-5a5f6226eb8c,client:192.168.32.10,api-group:operator.openshift.io,api-version:v1,name:cluster,subresource:,namespace:,protocol:HTTP/2.0,resource:storages,scope:resource,url:/apis/operator.openshift.io/v1/storages/cluster,user-agent:cluster-version-operator/v0.0.0 (linux/amd64) 
kubernetes/$Format,verb:GET (23-Feb-2026 13:15:58.590) (total time: 18164ms): Trace[1498573542]: ---"About to write a response" 18164ms (13:16:16.754) Trace[1498573542]: [18.164985246s] [18.164985246s] END E0223 13:16:18.897197 14 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded" logger="UnhandledError" E0223 13:16:18.898356 14 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError" E0223 13:16:18.899424 14 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError" E0223 13:16:18.900588 14 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError" I0223 13:16:18.901824 14 trace.go:236] Trace[674476277]: "Get" accept:application/json, */*,audit-id:53a5b2f8-8fb4-4b52-966b-59c4a9298f6f,client:10.128.0.34,api-group:coordination.k8s.io,api-version:v1,name:catalogd-operator-lock,subresource:,namespace:openshift-catalogd,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/openshift-catalogd/leases/catalogd-operator-lock,user-agent:manager/v0.0.0 (linux/amd64) kubernetes/$Format/leader-election,verb:GET (23-Feb-2026 13:15:18.898) (total time: 60003ms): Trace[674476277]: [1m0.003481891s] [1m0.003481891s] END E0223 13:16:18.902095 14 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="4.6618ms" method="GET" path="/apis/coordination.k8s.io/v1/namespaces/openshift-catalogd/leases/catalogd-operator-lock" result=null I0223 13:16:18.927383 14 apf_controller.go:493] "Update CurrentCL" plName="exempt" seatDemandHighWatermark=3 seatDemandAvg=1.297217570496087 seatDemandStdev=0.5075878549758417 
seatDemandSmoothed=11.760390342220045 fairFrac=2.1611379720161934 currentCL=3 concurrencyDenominator=3 backstop=false I0223 13:16:19.831341 1 main.go:175] Graceful termination time nearly passed and kube-apiserver has still not terminated. Deleting termination lock file "/var/log/kube-apiserver/.terminating" to avoid a false positive. I0223 13:16:19.832125 1 request.go:1351] Request Body: {"kind":"Event","apiVersion":"v1","metadata":{"name":"kube-apiserver-master-0.1896e2889f3755db","namespace":"openshift-kube-apiserver","creationTimestamp":null},"involvedObject":{"kind":"Pod","namespace":"openshift-kube-apiserver","name":"kube-apiserver-master-0","apiVersion":"v1"},"reason":"GracefulTerminationTimeout","message":"kube-apiserver did not terminate within 15s","source":{"component":"apiserver","host":"master-0"},"firstTimestamp":"2026-02-23T13:16:19Z","lastTimestamp":"2026-02-23T13:16:19Z","count":1,"type":"Warning","eventTime":null,"reportingComponent":"","reportingInstance":""} I0223 13:16:19.832350 1 round_trippers.go:466] curl -v -XPOST -H "Content-Type: application/json" -H "User-Agent: watch-termination/v1.31.14 (linux/amd64) kubernetes/8311c4d" -H "Accept: application/json, */*" -H "Authorization: Bearer " 'https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/events' I0223 13:16:19.833479 1 round_trippers.go:495] HTTP Trace: DNS Lookup for localhost resolved to [{::1 } {127.0.0.1 }] I0223 13:16:19.833831 1 round_trippers.go:510] HTTP Trace: Dial to tcp:[::1]:6443 succeed I0223 13:16:19.838313 1 round_trippers.go:553] POST https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/events 429 Too Many Requests in 5 milliseconds I0223 13:16:19.838364 1 round_trippers.go:570] HTTP Statistics: DNSLookup 0 ms Dial 0 ms TLSHandshake 3 ms ServerProcessing 0 ms Duration 5 ms I0223 13:16:19.838376 1 round_trippers.go:577] Response Headers: I0223 13:16:19.838390 1 round_trippers.go:580] Date: Mon, 23 Feb 2026 13:16:19 GMT I0223 13:16:19.838401 1 
round_trippers.go:580] Audit-Id: 52448058-79f3-4d0d-a81d-6246e588d5c4 I0223 13:16:19.838417 1 round_trippers.go:580] Content-Type: text/plain; charset=utf-8 I0223 13:16:19.838425 1 round_trippers.go:580] Retry-After: 5 I0223 13:16:19.838432 1 round_trippers.go:580] X-Content-Type-Options: nosniff I0223 13:16:19.838441 1 round_trippers.go:580] Content-Length: 56 E0223 13:16:20.446746 14 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 6.051µs, panicked: false, err: context deadline exceeded, panic-reason: " logger="UnhandledError" I0223 13:16:20.446891 14 trace.go:236] Trace[420625591]: "Update" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:cc1ea90f-b257-487e-88b0-ce68c7108ff4,client:10.128.0.60,api-group:coordination.k8s.io,api-version:v1,name:cluster-storage-operator-lock,subresource:,namespace:openshift-cluster-storage-operator,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/openshift-cluster-storage-operator/leases/cluster-storage-operator-lock,user-agent:cluster-storage-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (23-Feb-2026 13:15:46.446) (total time: 34000ms): Trace[420625591]: ["GuaranteedUpdate etcd3" audit-id:cc1ea90f-b257-487e-88b0-ce68c7108ff4,key:/leases/openshift-cluster-storage-operator/cluster-storage-operator-lock,type:*coordination.Lease,resource:leases.coordination.k8s.io 34000ms (13:15:46.446) Trace[420625591]: ---"Txn call failed" err:context deadline exceeded 33998ms (13:16:20.446)] Trace[420625591]: [34.000485555s] [34.000485555s] END E0223 13:16:21.060614 14 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"context canceled\"}: context canceled" logger="UnhandledError" E0223 13:16:21.061958 14 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError" E0223 13:16:21.063510 14 
status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError" E0223 13:16:21.064658 14 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError" I0223 13:16:21.065947 14 trace.go:236] Trace[1915821334]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:6b8ddd55-65de-47d6-bedd-6079de200f3f,client:10.128.0.8,api-group:coordination.k8s.io,api-version:v1,name:cluster-authentication-operator-lock,subresource:,namespace:openshift-authentication-operator,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/openshift-authentication-operator/leases/cluster-authentication-operator-lock,user-agent:authentication-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (23-Feb-2026 13:15:29.070) (total time: 51994ms): Trace[1915821334]: [51.994896622s] [51.994896622s] END E0223 13:16:21.066311 14 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="5.553555ms" method="GET" path="/apis/coordination.k8s.io/v1/namespaces/openshift-authentication-operator/leases/cluster-authentication-operator-lock" result=null I0223 13:16:21.228560 14 trace.go:236] Trace[1587554115]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:c6236854-3b8d-4a7f-a2b1-e6dac80174f5,client:192.168.32.10,api-group:coordination.k8s.io,api-version:v1,name:ovn-kubernetes-master,subresource:,namespace:openshift-ovn-kubernetes,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/openshift-ovn-kubernetes/leases/ovn-kubernetes-master,user-agent:master-0/ovnkube@392b470acd6e (linux/amd64) kubernetes/v0.33.3,verb:GET (23-Feb-2026 13:15:59.566) (total time: 21661ms): Trace[1587554115]: ---"About to write a response" 21661ms (13:16:21.228) Trace[1587554115]: 
[21.661671266s] [21.661671266s] END I0223 13:16:21.228692 14 trace.go:236] Trace[855740668]: "Update" accept:application/json, */*,audit-id:f055112e-75b5-4e8f-8c78-a3ad91c0d76b,client:10.128.0.72,api-group:coordination.k8s.io,api-version:v1,name:machine-config-controller,subresource:,namespace:openshift-machine-config-operator,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/openshift-machine-config-operator/leases/machine-config-controller,user-agent:machine-config-controller/v0.0.0 (linux/amd64) kubernetes/$Format/leader-election,verb:PUT (23-Feb-2026 13:15:59.960) (total time: 21267ms): Trace[855740668]: ["GuaranteedUpdate etcd3" audit-id:f055112e-75b5-4e8f-8c78-a3ad91c0d76b,key:/leases/openshift-machine-config-operator/machine-config-controller,type:*coordination.Lease,resource:leases.coordination.k8s.io 21267ms (13:15:59.961) Trace[855740668]: ---"Txn call completed" 21265ms (13:16:21.228)] Trace[855740668]: [21.267749383s] [21.267749383s] END I0223 13:16:21.228920 14 trace.go:236] Trace[1357468413]: "Update" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:c6b4ca44-28a4-4049-ade7-bf89997c8a5f,client:10.128.0.22,api-group:coordination.k8s.io,api-version:v1,name:kube-apiserver-operator-lock,subresource:,namespace:openshift-kube-apiserver-operator,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/openshift-kube-apiserver-operator/leases/kube-apiserver-operator-lock,user-agent:cluster-kube-apiserver-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (23-Feb-2026 13:15:53.345) (total time: 27883ms): Trace[1357468413]: ["GuaranteedUpdate etcd3" audit-id:c6b4ca44-28a4-4049-ade7-bf89997c8a5f,key:/leases/openshift-kube-apiserver-operator/kube-apiserver-operator-lock,type:*coordination.Lease,resource:leases.coordination.k8s.io 27883ms (13:15:53.345) Trace[1357468413]: ---"Txn call completed" 27881ms (13:16:21.228)] Trace[1357468413]: [27.883188523s] 
[27.883188523s] END I0223 13:16:21.234323 14 trace.go:236] Trace[1717212403]: "Get" accept:application/json, */*,audit-id:38a5495e-8525-4ee3-afea-b37f3aa95cba,client:10.128.0.7,api-group:coordination.k8s.io,api-version:v1,name:packageserver-controller-lock,subresource:,namespace:openshift-operator-lifecycle-manager,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/openshift-operator-lifecycle-manager/leases/packageserver-controller-lock,user-agent:psm/v0.0.0 (linux/amd64) kubernetes/$Format/leader-election,verb:GET (23-Feb-2026 13:16:05.270) (total time: 15963ms): Trace[1717212403]: ---"About to write a response" 15963ms (13:16:21.234) Trace[1717212403]: [15.963543352s] [15.963543352s] END I0223 13:16:21.234956 14 trace.go:236] Trace[154809105]: "Update" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:d5987a12-c066-45bd-81d2-3536b2a5ea92,client:10.128.0.17,api-group:coordination.k8s.io,api-version:v1,name:openshift-kube-storage-version-migrator-operator-lock,subresource:,namespace:openshift-kube-storage-version-migrator-operator,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/openshift-kube-storage-version-migrator-operator/leases/openshift-kube-storage-version-migrator-operator-lock,user-agent:cluster-kube-storage-version-migrator-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (23-Feb-2026 13:16:03.344) (total time: 17890ms): Trace[154809105]: ["GuaranteedUpdate etcd3" audit-id:d5987a12-c066-45bd-81d2-3536b2a5ea92,key:/leases/openshift-kube-storage-version-migrator-operator/openshift-kube-storage-version-migrator-operator-lock,type:*coordination.Lease,resource:leases.coordination.k8s.io 17890ms (13:16:03.344) Trace[154809105]: ---"Txn call completed" 17889ms (13:16:21.234)] Trace[154809105]: [17.89076371s] [17.89076371s] END I0223 13:16:21.235654 14 trace.go:236] Trace[189503098]: "Update" accept:application/json, 
*/*,audit-id:bd18479c-43a2-46fd-9d3e-085c7f8f162e,client:10.128.0.61,api-group:coordination.k8s.io,api-version:v1,name:machine-config,subresource:,namespace:openshift-machine-config-operator,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/openshift-machine-config-operator/leases/machine-config,user-agent:machine-config-operator/v0.0.0 (linux/amd64) kubernetes/$Format/leader-election,verb:PUT (23-Feb-2026 13:15:51.226) (total time: 30008ms): Trace[189503098]: ["GuaranteedUpdate etcd3" audit-id:bd18479c-43a2-46fd-9d3e-085c7f8f162e,key:/leases/openshift-machine-config-operator/machine-config,type:*coordination.Lease,resource:leases.coordination.k8s.io 30008ms (13:15:51.227) Trace[189503098]: ---"Txn call completed" 30007ms (13:16:21.235)] Trace[189503098]: [30.008867687s] [30.008867687s] END I0223 13:16:21.236234 14 trace.go:236] Trace[861187878]: "Update" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:1ef1adce-b33e-4fdd-8f7b-164fa6619974,client:10.128.0.9,api-group:coordination.k8s.io,api-version:v1,name:openshift-apiserver-operator-lock,subresource:,namespace:openshift-apiserver-operator,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/openshift-apiserver-operator/leases/openshift-apiserver-operator-lock,user-agent:cluster-openshift-apiserver-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (23-Feb-2026 13:15:58.994) (total time: 22241ms): Trace[861187878]: ["GuaranteedUpdate etcd3" audit-id:1ef1adce-b33e-4fdd-8f7b-164fa6619974,key:/leases/openshift-apiserver-operator/openshift-apiserver-operator-lock,type:*coordination.Lease,resource:leases.coordination.k8s.io 22240ms (13:15:58.995) Trace[861187878]: ---"Txn call completed" 22238ms (13:16:21.236)] Trace[861187878]: [22.241236897s] [22.241236897s] END I0223 13:16:21.236836 14 trace.go:236] Trace[1657191590]: "Update" 
accept:application/vnd.kubernetes.protobuf,application/json,audit-id:4b3df391-77f3-4830-9b09-601b4b558c68,client:10.128.0.5,api-group:coordination.k8s.io,api-version:v1,name:service-ca-operator-lock,subresource:,namespace:openshift-service-ca-operator,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/openshift-service-ca-operator/leases/service-ca-operator-lock,user-agent:service-ca-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (23-Feb-2026 13:16:03.355) (total time: 17881ms): Trace[1657191590]: ["GuaranteedUpdate etcd3" audit-id:4b3df391-77f3-4830-9b09-601b4b558c68,key:/leases/openshift-service-ca-operator/service-ca-operator-lock,type:*coordination.Lease,resource:leases.coordination.k8s.io 17881ms (13:16:03.355) Trace[1657191590]: ---"Txn call completed" 17880ms (13:16:21.236)] Trace[1657191590]: [17.881317167s] [17.881317167s] END I0223 13:16:21.237463 14 trace.go:236] Trace[975268968]: "Update" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:9779fd1f-e2a2-4ad3-8ea5-4598386636b4,client:10.128.0.12,api-group:coordination.k8s.io,api-version:v1,name:config-operator-lock,subresource:,namespace:openshift-config-operator,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/openshift-config-operator/leases/config-operator-lock,user-agent:cluster-config-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (23-Feb-2026 13:16:05.083) (total time: 16154ms): Trace[975268968]: ["GuaranteedUpdate etcd3" audit-id:9779fd1f-e2a2-4ad3-8ea5-4598386636b4,key:/leases/openshift-config-operator/config-operator-lock,type:*coordination.Lease,resource:leases.coordination.k8s.io 16154ms (13:16:05.083) Trace[975268968]: ---"Txn call completed" 16153ms (13:16:21.237)] Trace[975268968]: [16.15423643s] [16.15423643s] END I0223 13:16:21.238303 14 trace.go:236] Trace[875249408]: "Update" 
accept:application/vnd.kubernetes.protobuf,application/json,audit-id:60e06ece-1d81-48b8-b161-7d57a9ed53c0,client:10.128.0.30,api-group:coordination.k8s.io,api-version:v1,name:service-ca-controller-lock,subresource:,namespace:openshift-service-ca,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/openshift-service-ca/leases/service-ca-controller-lock,user-agent:service-ca-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (23-Feb-2026 13:15:57.054) (total time: 24183ms): Trace[875249408]: ["GuaranteedUpdate etcd3" audit-id:60e06ece-1d81-48b8-b161-7d57a9ed53c0,key:/leases/openshift-service-ca/service-ca-controller-lock,type:*coordination.Lease,resource:leases.coordination.k8s.io 24183ms (13:15:57.054) Trace[875249408]: ---"Txn call completed" 24181ms (13:16:21.237)] Trace[875249408]: [24.183424962s] [24.183424962s] END I0223 13:16:21.238741 14 trace.go:236] Trace[1343073937]: "Get" accept:application/json, */*,audit-id:b3a07013-3841-41e6-a25e-a09f267d326a,client:169.254.0.1,api-group:coordination.k8s.io,api-version:v1,name:cluster-machine-approver-leader,subresource:,namespace:openshift-cluster-machine-approver,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/openshift-cluster-machine-approver/leases/cluster-machine-approver-leader,user-agent:machine-approver/v0.0.0 (linux/amd64) kubernetes/$Format/leader-election,verb:GET (23-Feb-2026 13:15:26.129) (total time: 55109ms): Trace[1343073937]: ---"About to write a response" 55109ms (13:16:21.238) Trace[1343073937]: [55.109514227s] [55.109514227s] END I0223 13:16:21.238928 14 trace.go:236] Trace[1157200958]: "Update" 
accept:application/vnd.kubernetes.protobuf,application/json,audit-id:281073f8-5ee0-437f-bf64-935f7a06a887,client:10.128.0.18,api-group:coordination.k8s.io,api-version:v1,name:kube-controller-manager-operator-lock,subresource:,namespace:openshift-kube-controller-manager-operator,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager-operator/leases/kube-controller-manager-operator-lock,user-agent:cluster-kube-controller-manager-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (23-Feb-2026 13:16:03.346) (total time: 17892ms): Trace[1157200958]: ["GuaranteedUpdate etcd3" audit-id:281073f8-5ee0-437f-bf64-935f7a06a887,key:/leases/openshift-kube-controller-manager-operator/kube-controller-manager-operator-lock,type:*coordination.Lease,resource:leases.coordination.k8s.io 17892ms (13:16:03.346) Trace[1157200958]: ---"Txn call completed" 17891ms (13:16:21.238)] Trace[1157200958]: [17.892420156s] [17.892420156s] END I0223 13:16:21.239524 14 trace.go:236] Trace[1755401476]: "Get" accept:application/json, */*,audit-id:9bf0aac2-1115-40bb-baba-9b6b961880d0,client:10.128.0.54,api-group:coordination.k8s.io,api-version:v1,name:cloud-credential-operator-leader,subresource:,namespace:openshift-cloud-credential-operator,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/openshift-cloud-credential-operator/leases/cloud-credential-operator-leader,user-agent:cloud-credential-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (23-Feb-2026 13:15:45.188) (total time: 36051ms): Trace[1755401476]: ---"About to write a response" 36050ms (13:16:21.239) Trace[1755401476]: [36.051000858s] [36.051000858s] END I0223 13:16:21.239857 14 trace.go:236] Trace[2069553392]: "Get" 
accept:application/vnd.kubernetes.protobuf,application/json,audit-id:abc5b5b2-803a-4817-9389-f9a2a0872c21,client:10.128.0.88,api-group:coordination.k8s.io,api-version:v1,name:openshift-master-controllers,subresource:,namespace:openshift-controller-manager,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/openshift-controller-manager/leases/openshift-master-controllers,user-agent:openshift-controller-manager/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (23-Feb-2026 13:16:05.003) (total time: 16235ms): Trace[2069553392]: ---"About to write a response" 16235ms (13:16:21.239) Trace[2069553392]: [16.235880916s] [16.235880916s] END I0223 13:16:21.239891 14 trace.go:236] Trace[141758487]: "Get" accept:application/json, */*,audit-id:c00d7df6-3ea0-4008-bcd8-741bc00ff5d0,client:10.128.0.53,api-group:coordination.k8s.io,api-version:v1,name:control-plane-machine-set-leader,subresource:,namespace:openshift-machine-api,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/openshift-machine-api/leases/control-plane-machine-set-leader,user-agent:manager/v0.0.0 (linux/amd64) kubernetes/$Format/leader-election,verb:GET (23-Feb-2026 13:15:39.303) (total time: 41936ms): Trace[141758487]: ---"About to write a response" 41936ms (13:16:21.239) Trace[141758487]: [41.936370288s] [41.936370288s] END I0223 13:16:21.240204 14 trace.go:236] Trace[2006269269]: "Get" accept:application/json, */*,audit-id:3743f9b1-e2c5-4556-af9d-b46997c2025b,client:10.128.0.6,api-group:coordination.k8s.io,api-version:v1,name:marketplace-operator-lock,subresource:,namespace:openshift-marketplace,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/openshift-marketplace/leases/marketplace-operator-lock,user-agent:marketplace-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (23-Feb-2026 13:15:21.583) (total time: 59656ms): Trace[2006269269]: ---"About to write a response" 59656ms 
(13:16:21.240) Trace[2006269269]: [59.656383541s] [59.656383541s] END I0223 13:16:21.240425 14 trace.go:236] Trace[2017186542]: "Get" accept:application/vnd.kubernetes.protobuf, */*,audit-id:5cb89ba5-24f8-41b9-84d7-e7c0a8f3c31f,client:::1,api-group:coordination.k8s.io,api-version:v1,name:apiserver-ikvv4tbsccscc6vslq5b7kflme,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/apiserver-ikvv4tbsccscc6vslq5b7kflme,user-agent:kube-apiserver/v1.31.14 (linux/amd64) kubernetes/8311c4d,verb:GET (23-Feb-2026 13:15:22.898) (total time: 58341ms): Trace[2017186542]: ---"About to write a response" 58341ms (13:16:21.240) Trace[2017186542]: [58.341940922s] [58.341940922s] END I0223 13:16:21.240699 14 trace.go:236] Trace[174597092]: "Update" accept:application/json, */*,audit-id:8b7b5680-4b4f-4ffb-9694-fce09cd94e24,client:192.168.32.10,api-group:coordination.k8s.io,api-version:v1,name:version,subresource:,namespace:openshift-cluster-version,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/openshift-cluster-version/leases/version,user-agent:cluster-version-operator/v0.0.0 (linux/amd64) kubernetes/$Format/leader-election,verb:PUT (23-Feb-2026 13:15:50.174) (total time: 31065ms): Trace[174597092]: ["GuaranteedUpdate etcd3" audit-id:8b7b5680-4b4f-4ffb-9694-fce09cd94e24,key:/leases/openshift-cluster-version/version,type:*coordination.Lease,resource:leases.coordination.k8s.io 31065ms (13:15:50.175) Trace[174597092]: ---"Txn call completed" 31064ms (13:16:21.240)] Trace[174597092]: [31.065701447s] [31.065701447s] END I0223 13:16:21.240711 14 cacher.go:1017] cacher (leases.coordination.k8s.io): 3 objects queued in incoming channel. I0223 13:16:21.240748 14 cacher.go:1017] cacher (leases.coordination.k8s.io): 4 objects queued in incoming channel. 
I0223 13:16:21.240761 14 cacher.go:1017] cacher (leases.coordination.k8s.io): 5 objects queued in incoming channel. I0223 13:16:21.240771 14 cacher.go:1017] cacher (leases.coordination.k8s.io): 6 objects queued in incoming channel. I0223 13:16:21.240785 14 cacher.go:1017] cacher (leases.coordination.k8s.io): 7 objects queued in incoming channel. I0223 13:16:21.240792 14 cacher.go:1017] cacher (leases.coordination.k8s.io): 8 objects queued in incoming channel. I0223 13:16:21.240803 14 cacher.go:1017] cacher (leases.coordination.k8s.io): 9 objects queued in incoming channel. I0223 13:16:21.240810 14 cacher.go:1017] cacher (leases.coordination.k8s.io): 10 objects queued in incoming channel. I0223 13:16:21.240820 14 cacher.go:1017] cacher (leases.coordination.k8s.io): 11 objects queued in incoming channel. I0223 13:16:21.240826 14 cacher.go:1017] cacher (leases.coordination.k8s.io): 12 objects queued in incoming channel. I0223 13:16:21.240838 14 cacher.go:1017] cacher (leases.coordination.k8s.io): 13 objects queued in incoming channel. I0223 13:16:21.240845 14 cacher.go:1017] cacher (leases.coordination.k8s.io): 14 objects queued in incoming channel. I0223 13:16:21.240856 14 cacher.go:1017] cacher (leases.coordination.k8s.io): 15 objects queued in incoming channel. I0223 13:16:21.240863 14 cacher.go:1017] cacher (leases.coordination.k8s.io): 16 objects queued in incoming channel. I0223 13:16:21.240874 14 cacher.go:1017] cacher (leases.coordination.k8s.io): 17 objects queued in incoming channel. I0223 13:16:21.240882 14 cacher.go:1017] cacher (leases.coordination.k8s.io): 18 objects queued in incoming channel. I0223 13:16:21.240893 14 cacher.go:1017] cacher (leases.coordination.k8s.io): 19 objects queued in incoming channel. I0223 13:16:21.240900 14 cacher.go:1017] cacher (leases.coordination.k8s.io): 20 objects queued in incoming channel. I0223 13:16:21.240910 14 cacher.go:1017] cacher (leases.coordination.k8s.io): 21 objects queued in incoming channel. 
I0223 13:16:21.240916 14 cacher.go:1017] cacher (leases.coordination.k8s.io): 22 objects queued in incoming channel. I0223 13:16:21.240926 14 cacher.go:1017] cacher (leases.coordination.k8s.io): 23 objects queued in incoming channel. I0223 13:16:21.240934 14 cacher.go:1017] cacher (leases.coordination.k8s.io): 24 objects queued in incoming channel. I0223 13:16:21.241057 14 trace.go:236] Trace[53965150]: "Update" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:7d347bde-628c-4219-9cda-8a3bcc80e560,client:10.128.0.89,api-group:coordination.k8s.io,api-version:v1,name:openshift-route-controllers,subresource:,namespace:openshift-route-controller-manager,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/openshift-route-controller-manager/leases/openshift-route-controllers,user-agent:route-controller-manager/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (23-Feb-2026 13:15:53.399) (total time: 27841ms): Trace[53965150]: ["GuaranteedUpdate etcd3" audit-id:7d347bde-628c-4219-9cda-8a3bcc80e560,key:/leases/openshift-route-controller-manager/openshift-route-controllers,type:*coordination.Lease,resource:leases.coordination.k8s.io 27841ms (13:15:53.399) Trace[53965150]: ---"Txn call completed" 27840ms (13:16:21.240)] Trace[53965150]: [27.841843431s] [27.841843431s] END I0223 13:16:21.240260 14 trace.go:236] Trace[1916055322]: "Update" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:8bbaab2d-1ccb-4b52-bcb8-c86f88178913,client:10.128.0.23,api-group:coordination.k8s.io,api-version:v1,name:openshift-controller-manager-operator-lock,subresource:,namespace:openshift-controller-manager-operator,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/openshift-controller-manager-operator/leases/openshift-controller-manager-operator-lock,user-agent:cluster-openshift-controller-manager-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (23-Feb-2026 
13:15:50.180) (total time: 31059ms): Trace[1916055322]: ["GuaranteedUpdate etcd3" audit-id:8bbaab2d-1ccb-4b52-bcb8-c86f88178913,key:/leases/openshift-controller-manager-operator/openshift-controller-manager-operator-lock,type:*coordination.Lease,resource:leases.coordination.k8s.io 31059ms (13:15:50.180) Trace[1916055322]: ---"Txn call completed" 31058ms (13:16:21.240)] Trace[1916055322]: [31.059914967s] [31.059914967s] END I0223 13:16:21.241169 14 trace.go:236] Trace[615577310]: "Get" accept:application/json, */*,audit-id:5b36c031-700b-45d0-aeae-95b385e1a28b,client:10.128.0.65,api-group:coordination.k8s.io,api-version:v1,name:machine-api-operator,subresource:,namespace:openshift-machine-api,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/openshift-machine-api/leases/machine-api-operator,user-agent:machine-api-operator/v0.0.0 (linux/amd64) kubernetes/$Format/leader-election,verb:GET (23-Feb-2026 13:15:33.032) (total time: 48208ms): Trace[615577310]: ---"About to write a response" 48208ms (13:16:21.241) Trace[615577310]: [48.208427853s] [48.208427853s] END I0223 13:16:21.241186 14 trace.go:236] Trace[1396462968]: "Update" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:8c640f54-ff65-42b0-aaf2-75b4171e4891,client:10.128.0.13,api-group:coordination.k8s.io,api-version:v1,name:csi-snapshot-controller-operator-lock,subresource:,namespace:openshift-cluster-storage-operator,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/openshift-cluster-storage-operator/leases/csi-snapshot-controller-operator-lock,user-agent:csi-snapshot-controller-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (23-Feb-2026 13:15:52.360) (total time: 28880ms): Trace[1396462968]: ["GuaranteedUpdate etcd3" 
audit-id:8c640f54-ff65-42b0-aaf2-75b4171e4891,key:/leases/openshift-cluster-storage-operator/csi-snapshot-controller-operator-lock,type:*coordination.Lease,resource:leases.coordination.k8s.io 28880ms (13:15:52.360) Trace[1396462968]: ---"Txn call completed" 28879ms (13:16:21.240)] Trace[1396462968]: [28.880728442s] [28.880728442s] END E0223 13:16:21.245604 14 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"context canceled\"}: context canceled" logger="UnhandledError" E0223 13:16:21.246202 14 wrap.go:53] "Timeout or abort while handling" logger="UnhandledError" method="GET" URI="/apis/image.openshift.io/v1/namespaces/openshift/imagestreams/driver-toolkit" auditID="38d1c0dc-13ed-4991-823c-53d6564ca4b3" E0223 13:16:21.246233 14 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="1.85µs" method="GET" path="/apis/image.openshift.io/v1/namespaces/openshift/imagestreams/driver-toolkit" result=null E0223 13:16:21.246712 14 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError" E0223 13:16:21.248018 14 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError" E0223 13:16:21.249139 14 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError" E0223 13:16:21.249345 14 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"context canceled\"}: context canceled" logger="UnhandledError" E0223 13:16:21.250959 14 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError" I0223 13:16:21.250995 14 trace.go:236] Trace[1000815139]: "Get" 
accept:application/json,audit-id:e616d2ba-8918-4cf5-9c4b-188aaf71c5de,client:192.168.32.10,api-group:operator.openshift.io,api-version:v1,name:cluster,subresource:,namespace:,protocol:HTTP/2.0,resource:kubecontrollermanagers,scope:resource,url:/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster,user-agent:cluster-version-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (23-Feb-2026 13:15:57.877) (total time: 23373ms):
Trace[1000815139]: [23.373235022s] [23.373235022s] END
E0223 13:16:21.251598 14 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="5.874013ms" method="GET" path="/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster" result=null
E0223 13:16:21.252018 14 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
E0223 13:16:21.253056 14 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
I0223 13:16:21.254167 14 trace.go:236] Trace[1538817394]: "Get" accept:application/json,audit-id:87ff9e1a-bf01-4d57-a049-9cf8484ecdbb,client:192.168.32.10,api-group:config.openshift.io,api-version:v1,name:cluster,subresource:,namespace:,protocol:HTTP/2.0,resource:authentications,scope:resource,url:/apis/config.openshift.io/v1/authentications/cluster,user-agent:cluster-version-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (23-Feb-2026 13:15:46.242) (total time: 35012ms):
Trace[1538817394]: [35.012119048s] [35.012119048s] END
E0223 13:16:21.254308 14 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="5.001039ms" method="GET" path="/apis/config.openshift.io/v1/authentications/cluster" result=null
E0223 13:16:21.259437 14 wrap.go:53] "Timeout or abort while handling" logger="UnhandledError" method="GET" URI="/apis/route.openshift.io/v1/routes?allowWatchBookmarks=true&resourceVersion=14098&timeout=6m59s&timeoutSeconds=419&watch=true" auditID="fa86f8bd-55f7-4649-bf44-68063aeefaa2"
E0223 13:16:21.261502 14 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"context canceled\"}: context canceled" logger="UnhandledError"
E0223 13:16:21.262270 14 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
E0223 13:16:21.262380 14 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
E0223 13:16:21.262466 14 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 46.992µs, panicked: false, err: context canceled, panic-reason: " logger="UnhandledError"
E0223 13:16:21.263054 14 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
E0223 13:16:21.263079 14 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 2.37µs, panicked: false, err: context canceled, panic-reason: " logger="UnhandledError"
I0223 13:16:21.263157 14 trace.go:236] Trace[199788089]: "Update" accept:application/json,audit-id:9ca79fc2-1094-4692-8161-7d018fec7396,client:10.128.0.18,api-group:operator.openshift.io,api-version:v1,name:cluster,subresource:status,namespace:,protocol:HTTP/2.0,resource:kubecontrollermanagers,scope:resource,url:/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status,user-agent:cluster-kube-controller-manager-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (23-Feb-2026 13:16:00.377) (total time: 20885ms):
Trace[199788089]: ["GuaranteedUpdate etcd3" audit-id:9ca79fc2-1094-4692-8161-7d018fec7396,key:/operator.openshift.io/kubecontrollermanagers/cluster,type:*unstructured.Unstructured,resource:kubecontrollermanagers.operator.openshift.io 20883ms (13:16:00.379)
Trace[199788089]: ---"Txn call failed" err:context canceled 20876ms (13:16:21.263)]
Trace[199788089]: [20.885640478s] [20.885640478s] END
E0223 13:16:21.263187 14 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"context canceled\"}: context canceled" logger="UnhandledError"
E0223 13:16:21.263289 14 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 3.32µs, panicked: false, err: context canceled, panic-reason: " logger="UnhandledError"
E0223 13:16:21.263292 14 wrap.go:53] "Timeout or abort while handling" logger="UnhandledError" method="PUT" URI="/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status" auditID="9ca79fc2-1094-4692-8161-7d018fec7396"
E0223 13:16:21.263333 14 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 1.022209ms, panicked: false, err: context canceled, panic-reason: " logger="UnhandledError"
E0223 13:16:21.263357 14 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
E0223 13:16:21.263473 14 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
E0223 13:16:21.263477 14 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
E0223 13:16:21.263615 14 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="305.929µs" method="PUT" path="/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status" result=null
E0223 13:16:21.265030 14 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
E0223 13:16:21.265458 14 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
E0223 13:16:21.265537 14 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
E0223 13:16:21.265635 14 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
E0223 13:16:21.266232 14 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
E0223 13:16:21.264860 14 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
I0223 13:16:21.266820 14 trace.go:236] Trace[1355049456]: "Patch" accept:application/json,audit-id:c20a5db7-97d6-4cd6-af36-e588ad42c7a8,client:10.128.0.18,api-group:operator.openshift.io,api-version:v1,name:cluster,subresource:status,namespace:,protocol:HTTP/2.0,resource:kubecontrollermanagers,scope:resource,url:/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status,user-agent:cluster-kube-controller-manager-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:APPLY (23-Feb-2026 13:16:03.685) (total time: 17581ms):
Trace[1355049456]: ["GuaranteedUpdate etcd3" audit-id:c20a5db7-97d6-4cd6-af36-e588ad42c7a8,key:/operator.openshift.io/kubecontrollermanagers/cluster,type:*unstructured.Unstructured,resource:kubecontrollermanagers.operator.openshift.io 17581ms (13:16:03.685)
Trace[1355049456]: ---"Txn call failed" err:context canceled 17567ms (13:16:21.262)]
Trace[1355049456]: [17.581346962s] [17.581346962s] END
E0223 13:16:21.266989 14 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="4.799674ms" method="PATCH" path="/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status" result=null
I0223 13:16:21.267058 14 trace.go:236] Trace[1808149945]: "Patch" accept:application/json,audit-id:411d6c3b-b7da-441b-8dab-9f36ac7d00ae,client:10.128.0.18,api-group:operator.openshift.io,api-version:v1,name:cluster,subresource:status,namespace:,protocol:HTTP/2.0,resource:kubecontrollermanagers,scope:resource,url:/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status,user-agent:cluster-kube-controller-manager-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:APPLY (23-Feb-2026 13:16:01.179) (total time: 20087ms):
Trace[1808149945]: ["GuaranteedUpdate etcd3" audit-id:411d6c3b-b7da-441b-8dab-9f36ac7d00ae,key:/operator.openshift.io/kubecontrollermanagers/cluster,type:*unstructured.Unstructured,resource:kubecontrollermanagers.operator.openshift.io 20086ms (13:16:01.180)
Trace[1808149945]: ---"Txn call failed" err:context canceled 20071ms (13:16:21.262)]
Trace[1808149945]: [20.087176904s] [20.087176904s] END
E0223 13:16:21.267136 14 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="4.843545ms" method="PATCH" path="/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status" result=null
E0223 13:16:21.267162 14 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
E0223 13:16:21.267456 14 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
I0223 13:16:21.268262 14 trace.go:236] Trace[2007263369]: "Get" accept:application/json,audit-id:4c3e7e68-2e2a-4779-99c0-3016846c92ba,client:10.128.0.18,api-group:operator.openshift.io,api-version:v1,name:cluster,subresource:,namespace:,protocol:HTTP/2.0,resource:kubecontrollermanagers,scope:resource,url:/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster,user-agent:cluster-kube-controller-manager-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (23-Feb-2026 13:15:34.733) (total time: 46535ms):
Trace[2007263369]: [46.535133166s] [46.535133166s] END
E0223 13:16:21.268355 14 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="6.899242ms" method="GET" path="/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster" result=null
E0223 13:16:21.268483 14 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
I0223 13:16:21.268633 14 trace.go:236] Trace[1725694668]: "Patch" accept:application/json,audit-id:7c836baf-2dfa-4b6f-a9d3-735a700b063f,client:10.128.0.18,api-group:operator.openshift.io,api-version:v1,name:cluster,subresource:status,namespace:,protocol:HTTP/2.0,resource:kubecontrollermanagers,scope:resource,url:/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status,user-agent:cluster-kube-controller-manager-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:APPLY (23-Feb-2026 13:15:58.077) (total time: 23191ms):
Trace[1725694668]: ["GuaranteedUpdate etcd3" audit-id:7c836baf-2dfa-4b6f-a9d3-735a700b063f,key:/operator.openshift.io/kubecontrollermanagers/cluster,type:*unstructured.Unstructured,resource:kubecontrollermanagers.operator.openshift.io 23191ms (13:15:58.077)
Trace[1725694668]: ---"Txn call failed" err:context canceled 23176ms (13:16:21.262)]
Trace[1725694668]: [23.191410782s] [23.191410782s] END
E0223 13:16:21.269165 14 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="6.103931ms" method="PATCH" path="/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster/status" result=null
I0223 13:16:21.270394 14 trace.go:236] Trace[587529001]: "Get" accept:application/json,audit-id:151c56e0-6952-4129-baec-c5351247076c,client:10.128.0.18,api-group:operator.openshift.io,api-version:v1,name:cluster,subresource:,namespace:,protocol:HTTP/2.0,resource:kubecontrollermanagers,scope:resource,url:/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster,user-agent:cluster-kube-controller-manager-operator/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (23-Feb-2026 13:15:30.495) (total time: 50774ms):
Trace[587529001]: [50.774395165s] [50.774395165s] END
E0223 13:16:21.270588 14 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="7.792597ms" method="GET" path="/apis/operator.openshift.io/v1/kubecontrollermanagers/cluster" result=null
---